# PaddleOCR-VL: Boosting Multilingual Document Parsing via a 0.9B Ultra-Compact Vision-Language Model Cheng Cui, Ting Sun, Suyin Liang, Tingquan Gao, Zelun Zhang, Jiaxuan Liu, Xueqing Wang, Changda Zhou, Hongen Liu, Manhui Lin, Yue Zhang, Yubo Zhang, Handong Zheng, Jing Zhang, Jun Zhang, Yi Liu, Dianhai Yu, Yanjun Ma PaddlePaddle Team, Baidu Inc. [email protected] Source Code: https://github.com/PaddlePaddle/PaddleOCR Models & Online Demo: https://huggingface.co/PaddlePaddle ## Abstract In this report, we propose PaddleOCR- VL, a SOTA and resource- efficient model tailored for document parsing. Its core component is PaddleOCR- VL- 0.9B, a compact yet powerful vision- language model (VLM) that integrates a NaViT- style dynamic resolution visual encoder with the ERNIE- 4.5- 0.3B language model to enable accurate element recognition. This innovative model efficiently supports 109 languages and excels in recognizing complex elements (e.g., text, tables, formulas, and charts), while maintaining minimal resource consumption. Through comprehensive evaluations on widely used public benchmarks and in- house benchmarks, PaddleOCR- VL achieves SOTA performance in both page- level document parsing and element- level recognition. It significantly outperforms existing solutions, exhibits strong competitiveness against top- tier VLMs, and delivers fast inference speeds. These strengths make it highly suitable for practical deployment in real- world scenarios. ![](images/0_0.jpg) <center>Figure 1 | Performance of PaddleOCR-VL on OmniDocBench v1.0 and v1.5. </center> <--- Page Split ---> ## Contents 1 Introduction 3 2 PaddleOCR-VL 4 2.1 Architecture 4 2.2 Training Recipe 7 3 Dataset 9 3.1 Data Curation 9 3.2 Automatic Data Annotation 10 3.3 Hard Cases Mining 10 4 Evaluation 10 4.1 Page-level Evaluation 11 4.2 Element-level Evaluation 13 4.3 Inference Performance 17 5 Conclusion 18 A Training Dataset Details 25 A.1 Text 25 A.2 Table 26 A.3 Formula 27 A.4 Chart 28 B Supported Languages 30 C Inference Performance on Different Hardware Configurations 31 D Real-world Samples 32 D.1 Comprehensive Document Parsing 33 D.2 Layout Detection 37 D.3 Reading Order 40 D.4 Text Recognition 42 D.5 Table Recognition 51 D.6 Formula Recognition 53 D.7 Chart Recognition 55 E Compare with Others 58 E.1 Layout Detection 59 E.2 Text Recognition 61 E.3 Table Recognition 67 E.4 Formula Recognition 69 E.5 Chart Recognition 70 <--- Page Split ---> ## 1. Introduction Documents serve as core information carriers, with their complexity and volume growing at an exponential rate, making document parsing an indispensable key technology. The primary goal of document parsing [1, 2, 3, 4] is to enable deep structural and semantic understanding of a document's layout. Specifically, it involves recognizing distinct text blocks and columns, distinguishing formulas, tables, charts, and images, determining the correct reading order, and detecting key elements (e.g., footnotes and image captions); these capabilities collectively lay a solid foundation for efficient information retrieval and data management. Furthermore, advanced document parsing enables large language models (LLMs) [5, 6, 7], especially when combined with Retrieval- Augmented Generation (RAG) [8], to access high- quality knowledge and enhance their practical applications. 
The inherent complexity of modern documents presents unique challenges: they often combine dense text, complex tables or charts, mathematical expressions, multiple languages, and handwritten text within diverse layout structures. Recent research [1, 9, 10, 11, 12] in the field of document parsing primarily follows two technological approaches. The first approach [9, 10] employs pipeline methodologies based on specialized, modular expert models. Although these methods offer strong performance, they are increasingly hindered by integration complexity, cumulative error propagation, and inherent limitations when handling highly complex documents. The second approach [12, 13, 14] comprises end-to-end methods that leverage multimodal models to simplify the workflow and enable joint optimization. However, these methods often struggle to maintain the correct text order and can even generate hallucinations when faced with lengthy or complex layouts, while also incurring substantial computational overhead for long sequence outputs, thereby restricting their practical deployment.

To address these challenges, we present PaddleOCR-VL, a high-performance, resource-efficient document parsing solution based on a vision-language model. This innovation paves the way for the widespread application of multimodal document parsing, particularly in resource-constrained environments. PaddleOCR-VL combines a robust layout analysis model with a compact yet powerful vision-language model, PaddleOCR-VL-0.9B. Firstly, PaddleOCR-VL performs layout detection and reading order prediction to obtain the positional coordinates and reading order of elements (text blocks, tables, formulas, and charts). Compared to multimodal methods that rely on grounding and sequence output (e.g., MinerU2.5 [2], Dolphin [3]), our method offers faster inference speeds, lower training costs, and easier extensibility for new layout categories. Subsequently, the elements are cropped according to their positions and fed into PaddleOCR-VL-0.9B for recognition. This vision-language model is specifically designed for resource-efficient inference and excels at element recognition within document parsing. By integrating a NaViT-style [15] dynamic high-resolution visual encoder with the lightweight ERNIE-4.5-0.3B [5] language model, we have significantly enhanced the model's dense text recognition capabilities and decoding efficiency.

To train a powerful multimodal model, we have developed a high-quality training data construction pipeline. We collected over 30 million training samples through public data acquisition and data synthesis. We meticulously designed prompt engineering to guide automatic labeling by general large models, based on the recognition results of expert models. Simultaneously, we performed data cleaning to remove low-quality or inconsistent annotations, such as those caused by model hallucinations. We also designed an evaluation engine, an assessment collection that categorizes each element into fine-grained categories. Through this automated evaluation, we can analyze the current model's training performance across different <--- Page Split ---> types. This allows us to conduct targeted hard sample mining based on element types and to construct similar challenging examples through data synthesis. Finally, we incorporated manual annotation for a small number of corner cases to complete the construction of the training data.
Comprehensive benchmarking on public benchmarks, including OmniDocBench v1.0, v1.5 [16] and olmOCR-Bench [12], as well as in-house ones, demonstrates that PaddleOCR-VL achieves SOTA performance in the document parsing task, significantly outperforming existing pipeline-based solutions and exhibiting strong competitiveness against leading vision-language models (VLMs). Moreover, PaddleOCR-VL is optimized for efficiency, delivering substantially lower latency and higher throughput than competing approaches.

PaddleOCR-VL actively addresses current challenges in document processing with a high-performance, resource-efficient multimodal document parsing solution. Its key contributions include:

- Compact yet Powerful VLM Architecture: We present a novel vision-language model that is specifically designed for resource-efficient inference, achieving outstanding performance in element recognition. By integrating a NaViT-style dynamic high-resolution visual encoder with the lightweight ERNIE-4.5-0.3B language model, we significantly enhance the model's recognition capabilities and decoding efficiency. This integration maintains high accuracy while reducing computational demands, making it well-suited for efficient and practical document processing applications.
- High-quality Data Construction Methodology: We propose a systematic and comprehensive methodology for constructing high-quality datasets, providing a solid training data foundation for efficient and robust document parsing. This methodology not only enables us to construct high-quality data on demand, but also provides a new perspective on the automated generation of high-quality data.
- SOTA Document Parsing Performance: PaddleOCR-VL achieves state-of-the-art performance in the document parsing task. It excels in recognizing complex document elements, such as text, tables, formulas, and charts, making it suitable for a wide range of challenging content types, including handwritten text and historical documents. Supporting 109 languages, including major global languages and those with diverse scripts such as Russian, Arabic, and Hindi, PaddleOCR-VL is highly applicable to multilingual and globalized document processing scenarios.

## 2. PaddleOCR-VL

### 2.1. Architecture

PaddleOCR-VL decomposes the complex task of document parsing into two stages, as illustrated in Figure 2. The first stage, PP-DocLayoutV2, is responsible for layout analysis, where it localizes semantic regions and predicts their reading order. Subsequently, the second stage, PaddleOCR-VL-0.9B, leverages these layout predictions to perform fine-grained recognition of diverse content, including text, tables, formulas, and charts. Finally, a lightweight post-processing module aggregates the outputs from both stages and formats the final document into structured Markdown and JSON.

![](images/4_0.jpg) <center>Figure 2 | The overview of PaddleOCR-VL. </center>

#### 2.1.1. Layout Analysis

Considering that end-to-end approaches based on VLMs rely on long-sequence autoregressive processes, which result in high latency and memory consumption and increase the risk of <--- Page Split --->
Specifically, we decouple the layout analysis process by introducing an independent model, PP- DocLayoutV2, dedicated solely to this task. PP- DocLayoutV2 consists of an object detection model (RT- DETR [17]) for elements localization and classification, as well as a lightweight pointer network [18] with six transformer layers to accurately predict the reading order of layout elements. This separation enables us to fully leverage the advanced capabilities of the vision model, which typically requires lower input image resolution, and contains significantly fewer parameters. As a result, it achieves stable and accurate layout analysis, without the instability issues that may arise in end- to- end approaches. ![](images/4_1.jpg) <center>Figure 3 | Architecture of layout analysis model. </center> Architecturally, PP- DocLayoutV2 is composed of two sequentially connected networks, as shown in Figure 3. The first is an RT- DETR- based [17] detection model that performs layout element detection and classification. The detected bounding boxes and class labels are then passed to a subsequent pointer network, which is responsible for ordering these layout elements. <--- Page Split ---> Specifically, we first apply per- class thresholds to select foreground proposals for the ordering network. The selected proposals are embedded using absolute 2D positional encodings and class label embeddings. Additionally, the encoder attention incorporates a geometric bias mechanism from Relation- DETR [18] to explicitly model pairwise geometric relationships among elements. The pairwise relation head linearly projects element representations into query and key vectors, then computes bilinear similarities to produce pairwise logits, resulting in an \(N \times N\) matrix that represents the relative order between each pair of elements. Finally, a deterministic win- accumulation decoding algorithm recovers a topologically consistent reading order for the detected layout elements. In comparison to other specialized models, such as LayoutReader [19], our model achieves higher performance with fewer parameters by efficiently extending RT- DETR [17] with a pointer network. #### 2.1.2. Element-level Recognition We systematically explore architecture configurations optimized for high accuracy and low computational overhead, and propose the PaddleOCR- VL- 0.9B as shown in Figure 4. ![](images/5_0.jpg) <center>Figure 4 | Architecture of PaddleOCR-VL-0.9B. </center> We adopted an architectural style inspired by LLaVA [20], integrating a pre- trained vision encoder with a dynamic resolution preprocessor, a randomly initialized 2- layer MLP projector, and a pre- trained large language model. Our architecture achieves a balance the scale of vision and language models to optimize performance in multi- elements recognition tasks. Compared to earlier document parsing models based on fixed- resolution or tiling- based approaches [4, 14, 21], our approach utilizes native dynamic high- resolution preprocessing. For the vision encoder, we employed a NaViT- style [15] encoder initialized from Keye- VL's [22] vision model, which support native- resolution inputs. This design enables the vision- language model to handle images of arbitrary resolution without distortion, yielding fewer hallucinations and stronger performance on text- intensive tasks. 
<--- Page Split ---> The projector is a randomly initialized 2-layer MLP with GELU [23] activation, incorporating a merge size of 2 to efficiently bridge visual features from the encoder to the language model's embedding space. In auto-regressive language models, the entire sequence is generated by predicting one token at a time, which means that the size of the decoder is directly linked to the overall inference latency: a smaller model decodes faster. With this in mind, we use the ERNIE-4.5-0.3B [5] model, an open-source language model that balances a relatively small number of parameters with strong inference efficiency. In our implementation, we further enhance positional representation by incorporating 3D-RoPE [24]. The combination of NaViT [15] with ERNIE-4.5-0.3B [5] has led to significant performance improvements in document parsing, achieving minimal memory usage and fast inference speed.

### 2.2. Training Recipe

The following sections introduce the training details of these two modules: PP-DocLayoutV2 for layout analysis and PaddleOCR-VL-0.9B for element recognition.

#### 2.2.1. Layout Analysis

We employ the PP-DocLayoutV2 model to perform layout element localization, classification, and reading order prediction. PP-DocLayoutV2 extends RT-DETR [17] by incorporating an additional pointer network [18], which is responsible for predicting the reading order of detected elements. The training process adopts a two-stage strategy: we first train the core RT-DETR [17] model for layout detection and classification. Afterward, we freeze its parameters and independently train the pointer network for reading order prediction.

For the first stage, we follow the training strategy of RT-DETR [17]. Specifically, we initialize the model with PP-DocLayout_Plus-L [25] pretrained weights and train it for 100 epochs on our self-constructed dataset comprising over 20,000 high-quality samples. In the second stage, the model outputs a matrix representing the pairwise ordering relationships between any two elements, and the Generalized Cross Entropy Loss [26] is computed with respect to the ground truth labels, as this loss function demonstrates increased robustness in scenarios where pre-annotated data are mixed into the dataset. We use a constant learning rate of 2e-4 and the AdamW optimizer, training for 200 epochs.

#### 2.2.2. Element-level Recognition

As described in Section 2.1.2, PaddleOCR-VL-0.9B consists of three modules: a vision encoder, a projector, and a language model. We adopt a post-adaptation strategy using pre-trained models. Specifically, the vision model is initialized with Keye-VL's weights, and the language model is initialized with ERNIE-4.5-0.3B's weights. The model is trained based on the ERNIEKit [27] repository, and the training methodology for our VLM is divided into two stages, as outlined in Table 1.

Table 1 | Training settings in stage 1 and stage 2.

<table><tr><td>Stages</td><td>Stage 1</td><td>Stage 2</td></tr><tr><td>Training Samples</td><td>29M</td><td>2.7M</td></tr><tr><td>Max Resolution</td><td>1280 × 28 × 28</td><td>2048 × 28 × 28</td></tr><tr><td>Sequence length</td><td>16384</td><td>16384</td></tr><tr><td>Trainable components</td><td>All</td><td>All</td></tr><tr><td>Batch sizes</td><td>128</td><td>128</td></tr><tr><td>Data Augmentation</td><td>Yes</td><td>Yes</td></tr><tr><td>Maximum LR</td><td>5 × 10−5</td><td>5 × 10−6</td></tr><tr><td>Minimum LR</td><td>5 × 10−6</td><td>5 × 10−7</td></tr><tr><td>Epoch</td><td>1</td><td>2</td></tr></table>

Stage 1: The initial stage focuses on pre-training alignment, where the model learns to associate visual information from images with corresponding textual representations. This crucial step is performed on a massive dataset comprising 29 million high-quality image-text pairs. During this phase, which runs for one epoch, the model is trained to establish a coherent understanding between diverse visual inputs and their semantic textual content. The training utilizes a batch size of 128, a sequence length of 16384, and supports a maximum image resolution of \(1280 \times 28 \times 28\), with data augmentation enabled to improve robustness. For optimization, the learning rate is scheduled between a maximum of \(5 \times 10^{-5}\) and a minimum of \(5 \times 10^{-6}\). The primary objective is to align the feature spaces of the vision encoder and the language model, enabling them to jointly process multimodal information effectively. This large-scale pre-training allows the model to capture intricate visual patterns, common textual structures, and their interdependencies across a vast range of contexts, laying a strong foundation for subsequent specialized tasks.

Stage 2: Following pre-training, the model undergoes instruction fine-tuning to adapt its general multimodal understanding to specific downstream element recognition tasks. This stage utilizes a meticulously curated dataset of 2.7 million samples, which is intentionally designed to be highly rich and diverse in its distribution. The training is conducted over two epochs, maintaining the batch size of 128 and sequence length of 16384, but increasing the maximum resolution to \(2048 \times 28 \times 28\) to handle more detailed inputs. A finer learning rate is adopted, with the maximum and minimum values set to \(5 \times 10^{-6}\) and \(5 \times 10^{-7}\), to carefully adjust the model on specialized data. The richness of this dataset encompasses a wide variety of document types, languages, writing systems, and visual complexities pertinent to real-world scenarios. During this fine-tuning phase, the model is trained with explicit instructions for four types of tasks:

1. OCR: This task fine-tunes the model to accurately identify and extract textual content from images, encompassing individual characters, words, text lines, text blocks, and the simple layout structure of page-level texts.
2. Table Recognition: The model learns to parse tabular structures within documents. This involves accurately extracting cell contents, identifying rows and columns, and recognizing the logical relationships between different table elements, ultimately generating structured representations based on the OTSL [28] format.
3. Formula Recognition: This instruction focuses on enabling the model to recognize and interpret mathematical and scientific formulas. It aims to convert their visual representation into structured LaTeX and distinguishes between inline (`\(...\)`) and display (`\[...\]`) equations.
4. Chart Recognition: This task trains the model to extract information from various types of charts, such as bar charts, line graphs, and pie charts, and to convert it into Markdown-format tables.
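To make these task formats concrete, the snippet below sketches hypothetical instruction/target pairs for the four tasks. The prompt wording, the OTSL token names, and the markdown details are illustrative assumptions; only the target formats (plain text, OTSL, LaTeX, and Markdown tables) follow the description above.

```python
# Illustrative instruction/target pairs for the four fine-tuning tasks.
# Prompt wording, OTSL token names, and markdown details are assumed
# placeholders; only the overall output formats follow the paper's description.
TASK_SAMPLES = {
    "ocr": {
        "instruction": "Recognize all text in the image.",
        "target": "PaddlePaddle Team, Baidu Inc.",
    },
    "table": {
        "instruction": "Parse the table into OTSL.",
        # OTSL encodes the grid with cell/merge/new-line tokens (see [28]).
        "target": "<fcel>Model<fcel>Overall<nl><fcel>PaddleOCR-VL<fcel>92.56<nl>",
    },
    "formula": {
        "instruction": "Convert the formula to LaTeX.",
        "target": r"\[ a^2 + b^2 = c^2 \]",  # display mode; inline uses \( ... \)
    },
    "chart": {
        "instruction": "Convert the chart to a Markdown table.",
        "target": "| Year | Sales |\n| --- | --- |\n| 2023 | 120 |\n| 2024 | 180 |",
    },
}
```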
<--- Page Split --->

## 3. Dataset

To build our high-quality and diverse training dataset, we propose a systematic methodology for constructing such datasets. As illustrated in Figure 5, we gather a diverse set of data from multiple sources to ensure comprehensive coverage. High-quality labels are then generated through automated annotation using large models, which guarantees precision and consistency. Additionally, we refine the training data by integrating challenging examples, which enhances the model's performance and robustness. Each of these crucial steps is detailed in the following sections.

![](images/8_0.jpg) <center>Figure 5 | The construction process of training data for PaddleOCR-VL-0.9B. </center>

### 3.1. Data Curation

To ensure the breadth and diversity of the dataset, data is collected from four main sources: open-source datasets, synthesized datasets, network-accessible datasets, and in-house datasets.

1. Open Source Dataset: As the foundation of our dataset, we systematically aggregated and curated a wide array of established public datasets. For textual content, we sourced data from the canonical dataset CASIA-HWDB [29]. Our mathematical expression data is derived from UniMER-1M [30] and MathWriting [31]. To ensure comprehensive coverage of data visualizations, we incorporated a rich spectrum of chart and graph datasets, including ChartQA [32], PlotQA [33], Chart2Text [34], DVQA [35], Unichart [36], Beagle [37], ChartINFO [38], visText [39], and ExcelChart [40]. Each of these sources underwent an initial filtering and cleaning protocol to rectify or discard noisy and low-quality annotations.
2. Data Synthesizing Dataset: Due to the naturally imbalanced distribution of public data, we employed a data synthesis strategy to produce large volumes of missing data types at low cost, providing our proposed model with unbiased document parsing performance.
3. Network Accessible Dataset: To improve model generalization and robustness against the complexities of unstructured real-world documents, we amassed an extensive corpus of publicly accessible data harvested from the Internet. This public collection was deliberately curated to encompass a rich spectrum of document types and visual styles. It includes <--- Page Split ---> academic papers, newspapers, formal scientific journal articles, scanned handwritten documents, diverse examination papers, slides, etc. The integration of these varied sources proved instrumental in significantly broadening the stylistic, structural, and domain diversity of our training data, thereby mitigating the risk of overfitting to clean, canonical datasets.
4. In-house Dataset: Through years of research in the field of OCR, we have accumulated extensive datasets with diverse data types across all tasks of document parsing. We incorporate all in-house datasets into training with precisely controlled proportions, which has become an important factor in enabling our models to achieve outstanding performance.

### 3.2. Automatic Data Annotation

After acquiring the raw data, we utilize an automatic data annotation process for large-scale labeling. Initially, we employ the expert model, PP-StructureV3, to conduct preliminary processing on the data, generating pseudo labels that may contain some inaccuracies. Subsequently, through prompt engineering, we create prompts that include the original images and their associated pseudo labels, which are then submitted to more advanced multimodal large language models, ERNIE-4.5-VL [5] and Qwen2.5-VL [24].
These sophisticated models refine and enhance the initial results by analyzing the image content, resulting in improved labels. Finally, to ensure the quality of the labels, the system performs a hallucination filtering step, which eliminates any potentially incorrect content generated by the large models, thereby producing reliable and high-quality labels.

### 3.3. Hard Cases Mining

To overcome performance bottlenecks in specific complex scenarios, we propose a hard case mining process for targeted performance improvement. We first develop an evaluation engine covering various element types. We created a substantial amount of evaluation data with precise labels obtained through manual annotation. These evaluation datasets are categorized into several types: text data includes 23 categories such as Chinese, English, printed, handwritten, Japanese, Latin, and emojis; table data includes 20 categories such as limited tables, unlimited tables, handwritten tables, checklists, invoices, and rotated tables; formula data includes 4 categories such as Chinese and English formulas, handwritten and printed, simple, and complex; chart data includes 11 categories such as Chinese and English charts, line charts, and bar charts, sourced from diverse origins to cover different document types. By running inference on this evaluation set and using corresponding professional metrics (e.g., EditDist for Text, TEDS [41] for Tables, RMS-F1 [42] for Charts, and BLEU [43] for Formulas), we can accurately identify hard cases where the model performs poorly. Finally, for these identified weaknesses, the system utilizes a rich set of resources (such as Font Library, CSS Library, Corpus) and rendering tools (like XeLaTeX and web browsers) to synthetically generate a large volume of new, high-quality hard cases.

## 4. Evaluation

To thoroughly assess the effectiveness of PaddleOCR-VL, we compared it against leading general vision-language models and specialized document parsing models across multiple public benchmarks and in-house benchmarks. We conducted comprehensive performance comparisons in two aspects: page-level document parsing and element-level recognition, which are detailed in Sections 4.1 and 4.2. Page-level evaluation involves analyzing entire pages of a document to parse their overall content, structure, and layout, while element-level evaluation is dedicated exclusively <--- Page Split --->
Specifically, our model achieves a top- ranking overall score of 92.56, surpassing the next best model, MinerU2.5- 1.2B (90.67). Moreover, our model establishes new SOTA results in the sub- tasks, including the lowest Text- Edit distance [44] of 0.035, the highest Formula- CDM score of 91.43, the leading scores of 89.76 and 93.52 in Table- TEDS and Table- TEDS- S, and the best reading ordering scores of 0.043, respectively. These results underscore its superior accuracy in text recognition, formula recognition, and complex table structure analysis. Table 2 | Comprehensive evaluation of document parsing on OmniDocBench v1.5. Results are reported by OmniDocBench [16] unless Ours. <table><tr><td>Model Type</td><td>Methods</td><td>Parameters</td><td>Overall↑</td><td>TextEdit↓</td><td>FormulaCDM↑</td><td>TableTEDS↑</td><td>TableTEDS-S↑</td><td>Reading OrderEdit↓</td></tr><tr><td rowspan="3">Pipeline Tools</td><td>Marker-1.8.2 [45]</td><td>-</td><td>71.30</td><td>0.206</td><td>76.66</td><td>57.88</td><td>71.17</td><td>0.250</td></tr><tr><td>Minerva2-pipeline [14]</td><td>-</td><td>75.51</td><td>0.209</td><td>76.55</td><td>70.90</td><td>79.11</td><td>0.225</td></tr><tr><td>PP-StructureV3 [10]</td><td>-</td><td>86.73</td><td>0.073</td><td>85.79</td><td>81.68</td><td>89.48</td><td>0.073</td></tr><tr><td rowspan="5">General VLMs</td><td>GPT-4o [7]</td><td>-</td><td>75.02</td><td>0.217</td><td>79.70</td><td>67.07</td><td>76.09</td><td>0.148</td></tr><tr><td>InternVL3-76B [46]</td><td>76B</td><td>80.33</td><td>0.131</td><td>83.42</td><td>70.64</td><td>77.74</td><td>0.113</td></tr><tr><td>InternVL3.5-241B [47]</td><td>241B</td><td>82.67</td><td>0.142</td><td>87.23</td><td>75.00</td><td>81.28</td><td>0.125</td></tr><tr><td>Qwen2.5-VL-72B [24]</td><td>72B</td><td>87.02</td><td>0.094</td><td>88.27</td><td>82.15</td><td>86.22</td><td>0.102</td></tr><tr><td>Gemini-2.5 Pro [48]</td><td>-</td><td>88.03</td><td>0.075</td><td>85.82</td><td>85.71</td><td>90.29</td><td>0.097</td></tr><tr><td rowspan="10">Specialized VLMs</td><td>Dolphin [3]</td><td>322M</td><td>74.67</td><td>0.125</td><td>67.85</td><td>68.70</td><td>77.77</td><td>0.124</td></tr><tr><td>OCRFlux-3B [49]</td><td>3B</td><td>74.82</td><td>0.193</td><td>68.03</td><td>75.75</td><td>80.23</td><td>0.202</td></tr><tr><td>Mistral OCR [50]</td><td>-</td><td>78.83</td><td>0.164</td><td>82.84</td><td>70.03</td><td>78.04</td><td>0.144</td></tr><tr><td>POINTS-Reader [4]</td><td>3B</td><td>80.98</td><td>0.134</td><td>79.20</td><td>77.13</td><td>81.66</td><td>0.145</td></tr><tr><td>olmoOCR-7B [12]</td><td>7B</td><td>81.79</td><td>0.096</td><td>86.04</td><td>68.92</td><td>74.77</td><td>0.121</td></tr><tr><td>MinerU2-VLM [14]</td><td>0.9B</td><td>85.56</td><td>0.078</td><td>80.95</td><td>83.54</td><td>87.66</td><td>0.086</td></tr><tr><td>Nanonets-OCR-s [51]</td><td>3B</td><td>85.59</td><td>0.093</td><td>85.90</td><td>80.14</td><td>85.57</td><td>0.108</td></tr><tr><td>MonkeyOCR-pro-1.2B [1]</td><td>1.9B</td><td>86.96</td><td>0.084</td><td>85.02</td><td>84.24</td><td>89.02</td><td>0.130</td></tr><tr><td>MonkeyOCR-3B [1]</td><td>3.7B</td><td>87.13</td><td>0.075</td><td>87.45</td><td>81.39</td><td>85.92</td><td>0.129</td></tr><tr><td>dots.ocr [52]</td><td>3B</td><td>88.41</td><td>0.048</td><td>83.22</td><td>86.78</td><td>90.62</td><td>0.053</td></tr><tr><td>MonkeyOCR-pro-3B [1]</td><td>3.7B</td><td>88.85</td><td>0.075</td><td>87.25</td><td>86.78</td><td>90.63</td><td>0.128</td></tr><tr><td>MinerU2.5 
[2]</td><td>1.2B</td><td>90.67</td><td>0.047</td><td>88.46</td><td>88.22</td><td>92.38</td><td>0.044</td></tr><tr><td>PaddleOCR-VL</td><td>0.9B</td><td>92.56</td><td>0.035</td><td>91.43</td><td>89.76</td><td>93.52</td><td>0.043</td></tr></table> OmniDocBench v1.0 A publicly available benchmark dataset specifically is designed to evaluate real- world document parsing capabilities. It comprises 981 PDF pages, spanning 9 distinct <--- Page Split ---> document types, 4 layout styles, and 3 language categories. Based on the experimental results presented in Table 3, PaddleOCR- VL demonstrates superior performance with an average overall edit distance of 0.115, demonstrating its superior capability in document parsing. The model excels in formula edit distance (0.241 EN, 0.316 ZH), and achieves the SOTA performance (0.062) and a comparable SOTA performance (0.041) for Chinese and English text edit distance respectively, showcasing its accuracy in handling textual and formulaic data. Although the model exhibits slightly lower performance in the English Table TEDS (88.0), this can be largely attributed to typo- related annotation errors in OmniDocBench v1.0. Nevertheless, it demonstrates a clear advantage in the Chinese Table TEDS (92.14). Regarding the reading order edit distance, the model achieves the best performance in Chinese (0.063) and a comparable SOTA result in English (0.045), emphasizing its capability to maintain structural integrity and logical document flow. Table 3 | Comprehensive evaluation of document parsing on OmniDocBench v1.0. Results are reported by OmniDocBench [16] unless MinerU2.5 and Ours. <table><tr><td rowspan="2">Method Type</td><td rowspan="2">Methods</td><td rowspan="2">AvgOverallEdit</td><td colspan="2">OverallEdit</td><td colspan="2">TextEdit</td><td colspan="2">FormulaEdit</td><td colspan="2">TableTEDST</td><td colspan="2">TableEdit</td><td colspan="2">Reading OrderEdit</td></tr><tr><td>EN</td><td>ZH</td><td>EN</td><td>ZH</td><td>EN</td><td>ZH</td><td>EN</td><td>ZH</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="7">Pipeline Tools</td><td>Docking-2.14.0 [11]</td><td>0.749</td><td>0.589</td><td>0.909</td><td>0.416</td><td>0.987</td><td>0.999</td><td>1</td><td>61.3</td><td>25.0</td><td>0.627</td><td>0.810</td><td>0.313</td><td>0.837</td></tr><tr><td>OpenParse-0.7.0 [53]</td><td>0.730</td><td>0.646</td><td>0.814</td><td>0.681</td><td>0.974</td><td>0.996</td><td>1</td><td>64.8</td><td>27.5</td><td>0.284</td><td>0.639</td><td>0.595</td><td>0.641</td></tr><tr><td>Unstructured-0.17.2 [54]</td><td>0.651</td><td>0.586</td><td>0.716</td><td>0.198</td><td>0.481</td><td>0.999</td><td>1</td><td>0</td><td>0.1</td><td>1</td><td>0.998</td><td>0.145</td><td>0.387</td></tr><tr><td>Pix2Text-1.1.2.3 [55]</td><td>0.424</td><td>0.320</td><td>0.528</td><td>0.138</td><td>0.356</td><td>0.276</td><td>0.611</td><td>73.6</td><td>66.2</td><td>0.584</td><td>0.645</td><td>0.281</td><td>0.499</td></tr><tr><td>Marker-1.7.1 [45]</td><td>0.397</td><td>0.296</td><td>0.497</td><td>0.085</td><td>0.293</td><td>0.374</td><td>0.688</td><td>67.6</td><td>54.0</td><td>0.609</td><td>0.678</td><td>0.116</td><td>0.329</td></tr><tr><td>Mathpix [56]</td><td>0.278</td><td>0.191</td><td>0.364</td><td>0.105</td><td>0.381</td><td>0.306</td><td>0.454</td><td>77.0</td><td>67.1</td><td>0.243</td><td>0.320</td><td>0.108</td><td>0.304</td></tr><tr><td>MinerU-pipeline 
[9]</td><td>0.203</td><td>0.162</td><td>0.244</td><td>0.072</td><td>0.111</td><td>0.313</td><td>0.581</td><td>77.4</td><td>79.5</td><td>0.166</td><td>0.150</td><td>0.097</td><td>0.136</td></tr><tr><td>PP-StructureV3 [10]</td><td>0.176</td><td>0.145</td><td>0.206</td><td>0.058</td><td>0.088</td><td>0.295</td><td>0.535</td><td>77.2</td><td>83.9</td><td>0.159</td><td>0.109</td><td>0.069</td><td>0.091</td></tr><tr><td rowspan="4">General VLMs</td><td>InternVL2-76B [57]</td><td>0.442</td><td>0.440</td><td>0.443</td><td>0.353</td><td>0.290</td><td>0.543</td><td>0.701</td><td>63.0</td><td>60.2</td><td>0.547</td><td>0.555</td><td>0.317</td><td>0.228</td></tr><tr><td>GPT-4o [7]</td><td>0.316</td><td>0.233</td><td>0.399</td><td>0.144</td><td>0.409</td><td>0.425</td><td>0.606</td><td>72.0</td><td>62.9</td><td>0.234</td><td>0.329</td><td>0.128</td><td>0.251</td></tr><tr><td>InternVL3-78B [46]</td><td>0.257</td><td>0.218</td><td>0.296</td><td>0.117</td><td>0.210</td><td>0.380</td><td>0.533</td><td>69.0</td><td>73.9</td><td>0.279</td><td>0.282</td><td>0.095</td><td>0.161</td></tr><tr><td>Qwen2.5-VL-72B [24]</td><td>0.238</td><td>0.214</td><td>0.261</td><td>0.092</td><td>0.180</td><td>0.315</td><td>0.434</td><td>81.4</td><td>83.0</td><td>0.341</td><td>0.262</td><td>0.106</td><td>0.168</td></tr><tr><td>Gemini2.5-Pro [48]</td><td>0.180</td><td>0.148</td><td>0.212</td><td>0.055</td><td>0.168</td><td>0.356</td><td>0.439</td><td>85.8</td><td>86.4</td><td>0.130</td><td>0.119</td><td>0.049</td><td>0.121</td></tr><tr><td rowspan="9">Specialized VLMs</td><td>Nougat [58]</td><td>0.713</td><td>0.452</td><td>0.973</td><td>0.365</td><td>0.998</td><td>0.488</td><td>0.941</td><td>39.9</td><td>0.0</td><td>0.572</td><td>1</td><td>0.382</td><td>0.954</td></tr><tr><td>SmolDocking-256M [13]</td><td>0.655</td><td>0.493</td><td>0.816</td><td>0.262</td><td>0.838</td><td>0.753</td><td>0.997</td><td>44.9</td><td>16.5</td><td>0.729</td><td>0.907</td><td>0.227</td><td>0.522</td></tr><tr><td>olmoCR-7B [12]</td><td>0.398</td><td>0.326</td><td>0.469</td><td>0.097</td><td>0.293</td><td>0.455</td><td>0.655</td><td>68.1</td><td>61.3</td><td>0.608</td><td>0.652</td><td>0.145</td><td>0.277</td></tr><tr><td>GOT [21]</td><td>0.349</td><td>0.287</td><td>0.411</td><td>0.189</td><td>0.315</td><td>0.360</td><td>0.528</td><td>53.2</td><td>47.2</td><td>0.459</td><td>0.520</td><td>0.141</td><td>0.280</td></tr><tr><td>OCRFlux-3B [49]</td><td>0.294</td><td>0.238</td><td>0.349</td><td>0.112</td><td>0.256</td><td>0.447</td><td>0.716</td><td>69.0</td><td>80.0</td><td>0.269</td><td>0.162</td><td>0.126</td><td>0.263</td></tr><tr><td>Nanonets-OCR-s [51]</td><td>0.289</td><td>0.283</td><td>0.295</td><td>0.134</td><td>0.231</td><td>0.518</td><td>0.546</td><td>76.8</td><td>79.4</td><td>0.343</td><td>0.201</td><td>0.135</td><td>0.200</td></tr><tr><td>Dolphin [3]</td><td>0.259</td><td>0.205</td><td>0.313</td><td>0.092</td><td>0.204</td><td>0.447</td><td>0.606</td><td>76.1</td><td>66.9</td><td>0.193</td><td>0.282</td><td>0.088</td><td>0.160</td></tr><tr><td>MinerU2-VLM [14]</td><td>0.186</td><td>0.133</td><td>0.238</td><td>0.045</td><td>0.115</td><td>0.273</td><td>0.506</td><td>82.1</td><td>83.4</td><td>0.150</td><td>0.209</td><td>0.066</td><td>0.122</td></tr><tr><td>MonkeyOCR-pro-1.2B [1]</td><td>0.184</td><td>0.146</td><td>0.221</td><td>0.068</td><td>0.118</td><td>0.272</td><td>0.452</td><td>81.3</td><td>85.5</td><td>0.149</td><td>0.134</td><td>0.093</td><td>0.179</td></tr><tr><td>MonkeyOCR-pro-3B 
[1]</td><td>0.172</td><td>0.138</td><td>0.206</td><td>0.067</td><td>0.107</td><td>0.246</td><td>0.421</td><td>81.5</td><td>87.5</td><td>0.139</td><td>0.111</td><td>0.100</td><td>0.185</td></tr><tr><td>dots.ocr [52]</td><td>0.143</td><td>0.125</td><td>0.160</td><td>0.032</td><td>0.066</td><td>0.329</td><td>0.416</td><td>88.6</td><td>89.0</td><td>0.099</td><td>0.092</td><td>0.040</td><td>0.067</td></tr><tr><td>MinerU2.5 [2]</td><td>0.143</td><td>0.111</td><td>0.174</td><td>0.050</td><td>0.074</td><td>0.258</td><td>0.473</td><td>88.3</td><td>89.2</td><td>0.089</td><td>0.083</td><td>0.045</td><td>0.068</td></tr><tr><td>PaddleOCR-VL</td><td>0.115</td><td>0.105</td><td>0.126</td><td>0.041</td><td>0.062</td><td>0.241</td><td>0.316</td><td>88.0</td><td>92.1</td><td>0.093</td><td>0.062</td><td>0.045</td><td>0.063</td></tr></table>

olmOCR-Bench The olmOCR-Bench [12] includes 1,402 PDF documents and 7,010 test cases, addressing diverse document types and extraction challenges. It offers a detailed evaluation framework for PDF content extraction by assessing tools and models through simple, clear, and machine-verifiable unit tests. This approach avoids biased evaluations and soft metric comparisons, allowing for the detection of subtle but significant extraction errors. Table 4 highlights the outstanding performance of PaddleOCR-VL in the olmOCR-Bench evaluation, achieving the highest overall score of \(80.0 \pm 1.0\). It excels in various categories, leading in ArXiv (85.7) and Headers and Footers (97.0), and securing second place in Multi-column text (79.9) and Long Tiny Text (85.7). These results highlight the proposed model's capability to effectively manage diverse document types, reinforcing its status as a top solution in document parsing and its reliability in complex OCR tasks.
<--- Page Split ---> <table><tr><td rowspan="2">Methods</td><td colspan="9">Unit Test Pass Rate ↑</td></tr><tr><td>Overall</td><td>ArXiv</td><td>Old Scans Math</td><td>Tables</td><td>Old Scans</td><td>Headers and Footers</td><td>Multi column</td><td>Long Tiny Text</td><td>Base</td></tr><tr><td>GOT [21]</td><td>48.3 ± 1.1</td><td>52.7</td><td>52.0</td><td>0.2</td><td>22.1</td><td>93.6</td><td>42.0</td><td>29.9</td><td>94.0</td></tr><tr><td>Gemini Flash 2 (No Anchor) [48]</td><td>57.8 ± 1.1</td><td>32.1</td><td>56.3</td><td>61.4</td><td>27.8</td><td>48.0</td><td>58.7</td><td>84.4</td><td>94.0</td></tr><tr><td>MinerU-pipeline [9]</td><td>61.5 ± 1.1</td><td>75.4</td><td>47.4</td><td>60.9</td><td>17.3</td><td>96.6</td><td>59.0</td><td>39.1</td><td>96.6</td></tr><tr><td>Gemini Flash 2 (Anchored) [48]</td><td>63.8 ± 1.2</td><td>54.5</td><td>56.1</td><td>72.1</td><td>34.2</td><td>64.7</td><td>61.5</td><td>71.5</td><td>95.6</td></tr><tr><td>Nanonets-OCR-s [51]</td><td>64.5 ± 1.1</td><td>67.0</td><td>68.6</td><td>77.7</td><td>39.5</td><td>40.7</td><td>69.9</td><td>53.4</td><td>99.3</td></tr><tr><td>Qwen2.5-VL-7B (No Anchor) [24]</td><td>65.5 ± 1.2</td><td>63.1</td><td>65.7</td><td>67.3</td><td>38.6</td><td>73.6</td><td>68.3</td><td>49.1</td><td>98.3</td></tr><tr><td>GPT-4o (No Anchor) [7]</td><td>68.9 ± 1.1</td><td>51.5</td><td>75.5</td><td>69.1</td><td>40.9</td><td>94.2</td><td>68.9</td><td>54.1</td><td>96.7</td></tr><tr><td>GPT-4o (Anchored) [7]</td><td>69.9 ± 1.1</td><td>53.5</td><td>74.5</td><td>70.0</td><td>40.7</td><td>93.8</td><td>69.3</td><td>60.6</td><td>96.8</td></tr><tr><td>Marker-1.8.2 [45]</td><td>70.1 ± 1.1</td><td>76.0</td><td>57.9</td><td>57.6</td><td>27.8</td><td>84.9</td><td>72.9</td><td>84.6</td><td>99.1</td></tr><tr><td>olmoOCR v0.1.75 (No Anchor) [12]</td><td>74.7 ± 1.1</td><td>71.5</td><td>71.4</td><td>71.4</td><td>42.8</td><td>94.1</td><td>77.7</td><td>71.0</td><td>97.8</td></tr><tr><td>olmoOCR v0.1.75 (Anchored) [12]</td><td>75.5 ± 1.0</td><td>74.9</td><td>71.2</td><td>71.0</td><td>42.2</td><td>94.5</td><td>78.3</td><td>73.3</td><td>98.3</td></tr><tr><td>MonkeyOCR-pro-3B [1]</td><td>75.8 ± 1.0</td><td>83.8</td><td>68.8</td><td>74.6</td><td>36.1</td><td>91.2</td><td>76.6</td><td>80.1</td><td>95.3</td></tr><tr><td>MinerU2.5 [2]</td><td>77.5 ± 1.0</td><td>81.1</td><td>74.0</td><td>85.1</td><td>33.8</td><td>96.3</td><td>65.5</td><td>89.8</td><td>94.4</td></tr><tr><td>dots.ocr [52]</td><td>79.1 ± 1.0</td><td>82.1</td><td>64.2</td><td>88.3</td><td>40.9</td><td>94.1</td><td>82.4</td><td>81.2</td><td>99.5</td></tr><tr><td>PaddleOCR-VL</td><td>80.0 ± 1.0</td><td>85.7</td><td>71.0</td><td>84.1</td><td>37.8</td><td>97.0</td><td>79.9</td><td>85.7</td><td>98.5</td></tr></table> Table 4 | Comprehensive evaluation of document parsing on olmoOCR-Bench. Results are reported by olmoOCR-Bench [12] unless MinerU2.5 and Ours. # 4.2. Element-level Evaluation This section centers on evaluating the element-level capabilities of PaddleOCR VL 0.9B. We thoroughly assessed four tasks: text, tables, formulas, and charts using both public competition data and in-house data. # 4.2.1. Text Recognition For text recognition, we utilize three benchmarks to validate the effectiveness of models based on the edit distance metric. **OmniDocBench-OCR-block:** From the 1355 images of OmniDocBench v1.5, we extracted all text-related sub-images based on layout detection labels, removing any with null annotations.This process resulted in a total of 17,148 block-level images. 
This evaluation set is named OmniDocBench-OCR-block, with the ground truth still sourced from OmniDocBench. This evaluation set can more accurately assess the model's text recognition performance on without being affected by layout detection. We use the average normalized edit distance for evaluation. In Table 5, we present a comprehensive comparison of performance across various document types using different models. Our model, PaddleOCR-VL, consistently demonstrates superior performance, achieving the lowest error rates in almost all categories. Specifically, PaddleOCR-VL achieves the best results in the PPT2PDF (0.049), Academic Literature (0.021), Book (0.045),Colorful Textbook (0.081), Exam Paper (0.115), Magazine (0.020), Newspaper (0.034), Note (0.081), and Research Report (0.033) categories. These results highlight PaddleOCR-VL's robust and versatile capability in handling diverse document types, establishing it as the leading method in the OmniDocBench-OCR-block performance evaluation. <--- Page Split ---> Table 5 | Overall Comparison of OmniDocBench-OCR-block Performance. <table><tr><td>Methods</td><td>PPT2PDF</td><td>Academic Literature</td><td>Book</td><td>Colorful Textbook</td><td>Edit Distance ↓ Exam Paper</td><td>Magazine</td><td>Newspaper</td><td>Note</td><td>Research Report</td></tr><tr><td>Qwen2.5-VL-72B [24]</td><td>0.054</td><td>0.023</td><td>0.061</td><td>0.084</td><td>0.195</td><td>0.032</td><td>0.056</td><td>0.118</td><td>0.040</td></tr><tr><td>MonkeyOCR-pro-3B [1]</td><td>0.058</td><td>0.021</td><td>0.064</td><td>0.096</td><td>0.116</td><td>0.023</td><td>0.058</td><td>0.124</td><td>0.052</td></tr><tr><td>MinerU2.5 [2]</td><td>0.195</td><td>0.089</td><td>0.111</td><td>0.234</td><td>0.194</td><td>0.147</td><td>0.056</td><td>0.142</td><td>0.094</td></tr><tr><td>Dolphin [3]</td><td>0.237</td><td>0.095</td><td>0.135</td><td>0.347</td><td>0.248</td><td>0.233</td><td>0.121</td><td>0.309</td><td>0.213</td></tr><tr><td>PaddleOCR-VL</td><td>0.049</td><td>0.021</td><td>0.045</td><td>0.081</td><td>0.115</td><td>0.020</td><td>0.034</td><td>0.081</td><td>0.033</td></tr></table> In- house- OCR: This is our self- built line- level text evaluation dataset which contains 107452 samples with high- quality labels. The dataset includes various text types such as handwritten Chinese, handwritten English, printed Chinese, printed English, traditional Chinese, ancient texts, general scenarios, Pinyin, obscure characters, vertical text, single characters, emojis, and artistic fonts. It also comprises evaluation sets for 109 languages, such as Latin and Japanese. Table 6 provides a detailed evaluation of performance across multiple languages and text types. In the Multilingual Metrics (Table 6a), the model demonstrates outstanding accuracy with the lowest edit distances in all evaluated scripts: Arabic(0.122), Korean(0.052), Tamil(0.043), Greek(0.135), Thai(0.081), Telugu (0.114), Devanagari (0.097), Cyrillic (0.109), Latin (0.013), and Japanese (0.096), indicating superior capability in handling diverse languages. Similarly, in the Text Type Metrics (Table 6b), it excels in various text types, achieving the lowest error rates in categories like Handwritten CN (0.089), Handwritten EN (0.042), Printed CN (0.035), Printed EN (0.016), Traditional Chinese (0.048), Ancient Texts(0.198), General Scene (0.067), Pinyin (0.113), Rare Characters (0.001), Vertical Text (0.005), Single Characters (0.027), Emoji (0.057), and Art Font (0.165). 
These impressive results underscore the model's robust performance and versatility, establishing it as the leading OCR solution in this benchmark comparison. (a) Multilingual Metrics. <table><tr><td rowspan="2">Methods</td><td colspan="10">Edit Distance ↓</td></tr><tr><td>Arabic</td><td>Korean</td><td>Tamil</td><td>Greek</td><td>Thai</td><td>Telugu</td><td>Devanagari</td><td>Cyrillic</td><td>Latin</td><td>Japanese</td></tr><tr><td>Qwen2.5-VL-72B [24]</td><td>0.405</td><td>0.056</td><td>0.389</td><td>0.165</td><td>0.194</td><td>0.758</td><td>0.164</td><td>0.220</td><td>0.021</td><td>0.181</td></tr><tr><td>Dolphin [3]</td><td>0.682</td><td>0.699</td><td>0.912</td><td>0.691</td><td>0.709</td><td>0.832</td><td>0.818</td><td>0.549</td><td>0.037</td><td>0.309</td></tr><tr><td>MonkeyOCR-pro-3B [1]</td><td>0.601</td><td>0.182</td><td>0.921</td><td>0.449</td><td>0.876</td><td>0.909</td><td>0.896</td><td>0.387</td><td>0.036</td><td>0.262</td></tr><tr><td>MinerU2.5 [2]</td><td>0.978</td><td>0.917</td><td>0.957</td><td>0.661</td><td>0.880</td><td>0.937</td><td>0.915</td><td>0.832</td><td>0.063</td><td>0.588</td></tr><tr><td>PaddleOCR-VL</td><td>0.122</td><td>0.052</td><td>0.043</td><td>0.135</td><td>0.081</td><td>0.011</td><td>0.097</td><td>0.109</td><td>0.013</td><td>0.086</td></tr></table> (b) Text Type Metrics. <table><tr><td rowspan="2">Methods</td><td colspan="10">Edit Distance ↓</td><td></td></tr><tr><td>Handwritten CN</td><td>Handwritten EN</td><td>Printed CN</td><td>Printed EN</td><td>Trad. Chinese</td><td>Ancient Texts</td><td>General Scene</td><td>Pinyin</td><td>Rare Char.</td><td>Vertical Text</td><td>Single Char.</td><td>Emoji</td><td>Art Font</td></tr><tr><td>Dolphin [3]</td><td>0.236</td><td>0.145</td><td>0.074</td><td>0.025</td><td>0.095</td><td>0.218</td><td>0.113</td><td>0.183</td><td>0.092</td><td>0.190</td><td>0.202</td><td>0.225</td><td>0.230</td></tr><tr><td>MonkeyOCR-pro-3B [1]</td><td>0.253</td><td>0.071</td><td>0.048</td><td>0.023</td><td>0.295</td><td>0.529</td><td>0.144</td><td>0.165</td><td>0.063</td><td>0.086</td><td>0.110</td><td>0.184</td><td>0.263</td></tr><tr><td>Qwen2.5-VL-72B [24]</td><td>0.188</td><td>0.047</td><td>0.037</td><td>0.018</td><td>0.100</td><td>0.387</td><td>0.122</td><td>0.186</td><td>0.034</td><td>0.090</td><td>0.041</td><td>0.134</td><td>0.220</td></tr><tr><td>MinerU2.5 [2]</td><td>0.370</td><td>0.088</td><td>0.041</td><td>0.023</td><td>0.232</td><td>0.950</td><td>0.179</td><td>0.256</td><td>0.048</td><td>0.962</td><td>0.097</td><td>0.174</td><td>0.337</td></tr><tr><td>PaddleOCR-VL</td><td>0.089</td><td>0.042</td><td>0.035</td><td>0.016</td><td>0.048</td><td>0.198</td><td>0.067</td><td>0.113</td><td>0.001</td><td>0.005</td><td>0.027</td><td>0.057</td><td>0.165</td></tr></table> Ocean- OCR- Handwritten: This is a line and paragraph levels handwritten evaluation dataset designed for comprehensive handwriting recognition assessment. It contains 400 samples, evenly divided into four subsets of 100 images each. The dataset covers both real and synthetic <--- Page Split ---> handwriting in Chinese and English. Real samples are collected from established handwriting datasets such as CASIA-HWDB [29], GNHK [59], and BRUSH [60], while synthetic samples are generated to simulate diverse writing styles, character densities, and layouts. The benchmark aims to provide balanced and fine-grained evaluation for handwritten text recognition across different scripts and writing conditions. 
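Most text-recognition results in this section, including the handwriting comparison in Table 7 below, are reported as edit distance. For reference, the sketch below shows one common definition, the Levenshtein distance normalized by the length of the longer string; the exact normalization used by each benchmark may differ slightly.

```python
def normalized_edit_distance(pred: str, ref: str) -> float:
    """Levenshtein distance between pred and ref, divided by the longer length.
    0.0 means an exact match; 1.0 means nothing matches."""
    if not pred and not ref:
        return 0.0
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(ref) + 1))
    for i, pc in enumerate(pred, start=1):
        curr = [i]
        for j, rc in enumerate(ref, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (pc != rc),  # substitution
            ))
        prev = curr
    return prev[-1] / max(len(pred), len(ref))

# Example: one substitution in a 12-character reference string.
print(round(normalized_edit_distance("PaddleOCR-Vl", "PaddleOCR-VL"), 3))  # 0.083
```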
Table 7 presents a comparison of OCR performance for handwritten English and Chinese text on the Ocean- OCR- Bench. Our model demonstrates superior performance across all metrics in both languages. For English, it achieves the best edit distance of 0.118 and excels in F1- score, Precision, Recall, BLEU, and METEOR, establishing itself as the leading model. In Chinese, PaddleOCR- VL sets a benchmark with an edit distance of 0.034 and leads in all other metrics, showcasing its outstanding precision and reliability. <table><tr><td rowspan="2">Methods</td><td colspan="2">Edit Distance ↓</td><td colspan="2">F1-score ↑</td><td colspan="2">Precision↑</td><td colspan="2">Recall↑</td><td colspan="2">BLEU↑</td><td colspan="2">METEOR↑</td></tr><tr><td>EN</td><td>ZH</td><td>EN</td><td>ZH</td><td>EN</td><td>ZH</td><td>EN</td><td>ZH</td><td></td><td></td><td></td><td></td></tr><tr><td>InternVL2.5-4B [57]</td><td>0.197</td><td>0.240</td><td>0.661</td><td>0.741</td><td>0.674</td><td>0.754</td><td>0.655</td><td>0.734</td><td>0.406</td><td>0.473</td><td>0.652</td><td>0.687</td></tr><tr><td>MiniCPM-V2.6-8B [61]</td><td>0.147</td><td>0.175</td><td>0.727</td><td>0.810</td><td>0.747</td><td>0.811</td><td>0.714</td><td>0.812</td><td>0.443</td><td>0.583</td><td>0.727</td><td>0.774</td></tr><tr><td>Qwen2-VL-7B [62]</td><td>0.127</td><td>0.113</td><td>0.760</td><td>0.881</td><td>0.773</td><td>0.884</td><td>0.754</td><td>0.884</td><td>0.490</td><td>0.666</td><td>0.756</td><td>0.859</td></tr><tr><td>GOT [21]</td><td>0.616</td><td>0.402</td><td>0.283</td><td>0.568</td><td>0.309</td><td>0.618</td><td>0.273</td><td>0.544</td><td>0.151</td><td>0.295</td><td>0.255</td><td>0.492</td></tr><tr><td>PaddleOCR [10]</td><td>0.418</td><td>0.325</td><td>0.237</td><td>0.664</td><td>0.232</td><td>0.646</td><td>0.263</td><td>0.700</td><td>0.069</td><td>0.431</td><td>0.236</td><td>0.648</td></tr><tr><td>TextIn</td><td>0.358</td><td>0.180</td><td>0.362</td><td>0.840</td><td>0.368</td><td>0.869</td><td>0.362</td><td>0.822</td><td>0.098</td><td>0.567</td><td>0.337</td><td>0.751</td></tr><tr><td>Ocean-OCR [63]</td><td>0.145</td><td>0.106</td><td>0.774</td><td>0.885</td><td>0.780</td><td>0.912</td><td>0.782</td><td>0.862</td><td>0.532</td><td>0.736</td><td>0.772</td><td>0.885</td></tr><tr><td>MinerU2.5 [2]</td><td>0.238</td><td>0.356</td><td>0.558</td><td>0.619</td><td>0.547</td><td>0.623</td><td>0.574</td><td>0.622</td><td>0.344</td><td>0.489</td><td>0.553</td><td>0.601</td></tr><tr><td>PaddleOCR-VL</td><td>0.118</td><td>0.034</td><td>0.750</td><td>0.957</td><td>0.748</td><td>0.959</td><td>0.753</td><td>0.957</td><td>0.551</td><td>0.856</td><td>0.787</td><td>0.936</td></tr></table> Table 7 | Comparison of performance on English(EN) and Chinese(ZH) OCR for handwritten recognition on Ocean- OCR- Bench. Results are reported by Ocean- OCR [63] unless MinerU2.5 and Ours. #### 4.2.2. Table Recognition. For table recognition, we utilize two benchmarks to validate the effectiveness of PaddleOCR- VL- 0.9B based on TEDS [41] and Edit Distance. OmniDocBench- Table- block: To evaluate the table recognition performance of PaddleOCR- VL, we crop 512 tables from OmniDocBench v1.5 datasets. As shown in Table 8, our PaddleOCR- VL leads in the OmniDocBench- Table- block benchmark, surpassing all competitors. It achieves a top overall TEDS of 0.9195, reflecting high accuracy in capturing table structure and content. 
Its structural TEDS of 0.9543 highlights its ability to parse complex structures, while the lowest Overall Edit Distance of 0.0561 indicates minimal recognition errors. These results confirm PaddleOCR- VL's superior performance and establish it as the benchmark for accurate table recognition. Table 8 | Comparison of OmniDocBench-Table-block Performance <table><tr><td>Methods</td><td>Overall TEDS↑</td><td>Structural TEDS↑</td><td>Overall Edit Dist↓</td></tr><tr><td>MinerU2-VLM [14]</td><td>0.9002</td><td>0.9369</td><td>0.0734</td></tr><tr><td>Seed1.6</td><td>0.9079</td><td>0.9489</td><td>0.0652</td></tr><tr><td>dots.ocr [52]</td><td>0.8194</td><td>0.8442</td><td>0.1508</td></tr><tr><td>MinerU2.5 [2]</td><td>0.9005</td><td>0.9539</td><td>0.0693</td></tr><tr><td>PaddleOCR-VL</td><td>0.9195</td><td>0.9543</td><td>0.0561</td></tr></table> <--- Page Split ---> In- house- Table: Our self- built evaluation set contains diverse array of table images with comprehensive annotations and type classifications. It includes 20 different table types such as Chinese, English, mixed Chinese- English, and tables with various characteristics like full, partial, or no borders. The collection also covers tables with formulas, dense data, book/manual formats, lists, academic papers, merged cells, as well as low- quality, watermarked, registration forms, statistical forms, research reports, financial reports, images, invoices, and handwritten tables, among others. Table 9 provides a comparison of different methods on the In- house- Table task, highlighting their performance across various metrics. We achieves the highest scores in Overall TEDS (0.8699), Structural TEDS (0.9066), Overall Edit Distance (0.9066) and Structural Edit Distance (0.9339). These results underscore PaddleOCR- VL's effectiveness and reliability in table recognition tasks. Table 9 | Comparison of In-house-Table Performance <table><tr><td>Methods</td><td>Overall TEDS↑</td><td>Structural TEDS↑</td><td>Overall Edit Dist↑</td><td>Structural Edit Dist↑</td></tr><tr><td>MinerU2-VLM [14]</td><td>0.8286</td><td>0.8730</td><td>0.8757</td><td>0.9088</td></tr><tr><td>MonkeyOCR [1]</td><td>0.7396</td><td>0.7824</td><td>0.8174</td><td>0.8537</td></tr><tr><td>Nanonets-OCR-s [51]</td><td>0.7824</td><td>0.8190</td><td>0.8377</td><td>0.8692</td></tr><tr><td>OCRFlux-3B [49]</td><td>0.7741</td><td>0.8071</td><td>0.8238</td><td>0.8617</td></tr><tr><td>Qwen2.5-VL-3B [24]</td><td>0.7398</td><td>0.7765</td><td>0.8132</td><td>0.8701</td></tr><tr><td>Qwen2.5-VL-7B [24]</td><td>0.7549</td><td>0.7926</td><td>0.8251</td><td>0.8819</td></tr><tr><td>Qwen2.5-VL-72B [24]</td><td>0.7762</td><td>0.8361</td><td>0.843</td><td>0.8987</td></tr><tr><td>dots.ocr [52]</td><td>0.7547</td><td>0.7914</td><td>0.8047</td><td>0.8361</td></tr><tr><td>MinerU2.5 [2]</td><td>0.8469</td><td>0.8955</td><td>0.8896</td><td>0.9239</td></tr><tr><td>PaddleOCR-VL</td><td>0.8699</td><td>0.9066</td><td>0.9066</td><td>0.9339</td></tr></table> #### 4.2.3. Formula Recognition. For formula recognition, we validate the effectiveness our model based on the Character Detection Matching (CDM) [64] metric on OmniDocBench- Formula- block and In- house- Formula datasets. OmniDocBench- Formula- block Using the formula bounding boxes from OmniDocBench v1.5, 1050 formula sub- images were cropped. This step was taken to minimize the influence of layout detection on formula recognition. As shown in Table 10, the model achieved state- of- the- art CDM score of 0.9453. 
<table><tr><td>Methods</td><td>Overall CDM ↑</td><td>EN CDM ↑</td><td>ZH CDM ↑</td></tr><tr><td>dots.ocr [52]</td><td>0.4641</td><td>0.4868</td><td>0.4414</td></tr><tr><td>MinerU2-VLM [14]</td><td>0.8286</td><td>0.9616</td><td>0.6956</td></tr><tr><td>MonkeyOCR-pro-1.2B [1]</td><td>0.8531</td><td>0.9642</td><td>0.7419</td></tr><tr><td>MonkeyOCR-3B [1]</td><td>0.8621</td><td>0.9718</td><td>0.7524</td></tr><tr><td>Qwen2.5-VL-72B [24]</td><td>0.8747</td><td>0.9574</td><td>0.7920</td></tr><tr><td>MinerU2.5 [2]</td><td>0.9187</td><td>0.9751</td><td>0.8623</td></tr><tr><td>PaddleOCR-VL</td><td>0.9433</td><td>0.9677</td><td>0.9228</td></tr></table>

Table 10 | Comparison of OmniDocBench v1.5 Formula-block Performance. Because dots.ocr [52] tends to recognize cropped formulas as images, its score is relatively low.

In-house-Formula: The self-constructed formula evaluation set contains 34,816 samples, covering common formula recognition scenarios such as academic papers, mathematics books, and primary and secondary school exam papers. Among them, there are 498 Chinese formulas and 34,318 English formulas. As shown in Table 11, our model obtains the best CDM score of 0.9882 on the In-house-Formula dataset.

<--- Page Split --->

These results collectively demonstrate the powerful recognition capability of PaddleOCR-VL in real-world complex formula scenarios.

Table 11 | Comparison of In-house-Formula Performance. Because dots.ocr [52] tends to recognize cropped formulas as images, its score is relatively low.

<table><tr><td>Methods</td><td>Overall CDM ↑</td><td>EN CDM ↑</td><td>ZH CDM ↑</td></tr><tr><td>dots.ocr [52]</td><td>0.6737</td><td>0.8066</td><td>0.5408</td></tr><tr><td>MinerU2-VLM [14]</td><td>0.9237</td><td>0.9764</td><td>0.8709</td></tr><tr><td>MonkeyOCR-pro-1.2B [1]</td><td>0.9537</td><td>0.9656</td><td>0.9417</td></tr><tr><td>MonkeyOCR-3B [1]</td><td>0.9566</td><td>0.9761</td><td>0.9371</td></tr><tr><td>Qwen2.5-VL-72B [24]</td><td>0.9412</td><td>0.9519</td><td>0.9304</td></tr><tr><td>MinerU2.5 [2]</td><td>0.9770</td><td>0.9832</td><td>0.9708</td></tr><tr><td>PaddleOCR-VL</td><td>0.9882</td><td>0.9914</td><td>0.9849</td></tr></table>

#### 4.2.4. Chart Recognition.

For chart recognition, considering the limitations in dataset size, the imbalanced distribution of chart categories, and the poor annotation quality of publicly available test sets, we only utilize an in-house benchmark to validate the effectiveness of PaddleOCR-VL-0.9B based on the RMS-F1 [42] score metric. As shown in Table 12, the proposed PaddleOCR-VL not only outperforms expert OCR VLMs but also surpasses some 72B-level multimodal language models.

In-house-Chart: This in-house chart recognition evaluation set comprises 1,801 samples, all of which have undergone a rigorous manual review to ensure annotation correctness. The evaluation set is broadly categorized into 11 chart categories, including bar-line hybrid, pie, \(100\%\) stacked bar, area, bar, bubble, histogram, line, scatterplot, stacked area, and stacked bar. Of these samples, 851 are in English and 950 are in Chinese. Prior to evaluation, both the predicted data table and the ground truth data table are normalized to a uniform markdown format to eliminate expression ambiguities.
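As an illustration of this normalization step, the sketch below shows one way a predicted and a ground-truth data table could be brought into a single canonical markdown form before RMS-F1 scoring. The function names and the specific rules (comma and percent stripping, numeric canonicalization, case folding) are illustrative assumptions, not the exact procedure used for the benchmark.

```python
def _normalize_cell(cell: str) -> str:
    """Canonicalize one cell: trim whitespace, unify number formats
    (e.g. '1,200' -> '1200', '25%' -> '25'), and lowercase plain text
    so that equivalent expressions compare equal."""
    cell = cell.strip()
    num = cell.replace(",", "").rstrip("%")
    try:
        value = float(num)
        return f"{value:g}"  # drop trailing zeros: '25.0' and '25' normalize identically
    except ValueError:
        return cell.lower()

def normalize_markdown_table(md: str) -> str:
    """Parse a pipe-delimited markdown table and re-emit it in one canonical layout."""
    rows = []
    for line in md.strip().splitlines():
        line = line.strip().strip("|")
        if set(line.replace("|", "").strip()) <= {"-", ":", " "}:
            continue  # skip the header separator row
        rows.append([_normalize_cell(c) for c in line.split("|")])
    header, body = rows[0], rows[1:]
    out = ["| " + " | ".join(header) + " |", "|" + "---|" * len(header)]
    out += ["| " + " | ".join(r) + " |" for r in body]
    return "\n".join(out)

# Usage: two differently formatted but equivalent tables normalize to the same string.
pred = "| Year | Revenue |\n|---|---|\n| 2023 | 1,200 |\n| 2024 | 1,500 |"
gt = "| year | revenue |\n|:--|:--|\n| 2023 | 1200.0 |\n| 2024 | 1500 |"
assert normalize_markdown_table(pred) == normalize_markdown_table(gt)
```

After this canonicalization, the remaining differences between prediction and ground truth reflect genuine recognition errors rather than formatting variation, which is what the RMS-F1 score is meant to measure.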
Table 12 | Comparison of In-house-Chart Performance <table><tr><td rowspan="2">Methods</td><td colspan="3">RMS-F1 ↑</td></tr><tr><td>Overall</td><td>EN</td><td>ZH</td></tr><tr><td>TinyChart [65]</td><td>0.2159</td><td>0.4726</td><td>0.0876</td></tr><tr><td>GOT [21]</td><td>0.3160</td><td>0.1100</td><td>0.4190</td></tr><tr><td>OneChart [66]</td><td>0.3716</td><td>0.1384</td><td>0.4882</td></tr><tr><td>Qwen2.5-VL-3B [24]</td><td>0.5942</td><td>0.5619</td><td>0.6103</td></tr><tr><td>Qwen2.5-VL-7B [24]</td><td>0.6821</td><td>0.5876</td><td>0.7293</td></tr><tr><td>Qwen2.5-VL-72B [24]</td><td>0.7300</td><td>0.6972</td><td>0.7464</td></tr><tr><td>PP-StructureV3 [10]</td><td>0.8060</td><td>0.7963</td><td>0.8109</td></tr><tr><td>PaddleOCR-VL</td><td>0.8440</td><td>0.8222</td><td>0.8549</td></tr></table> ### 4.3. Inference Performance To improve the inference performance of PaddleOCR- VL, we introduce multi- threading asynchronous execution into the inference workflow. The process is divided into three main stages—data loading (e.g., rendering PDF pages as images), layout model processing, and VLM inference—each running in a separate thread. Data is transferred between adjacent stages via queues, enabling concurrent execution for higher efficiency. In particular, for VLM inference, batch processing is only triggered when either the number of items in the queue reaches a predefined threshold or the waiting time for queued data exceeds a specified limit. This design allows blocks across different pages to be aggregated and processed together, thereby <--- Page Split ---> maximizing parallelism, especially when handling large volumes of files. We further deploy PaddleOCR-VL-0.9B on high-throughput inference and serving engines [67, 68, 69], tuning parameters like max- num- batched- tokens and gpu- memory- utilization to balance inference throughput with GPU memory consumption. We measured the end- to- end inference speed and GPU usage on the OmniDocBench v1.0 dataset, processing PDF files in batches of 512 on a single NVIDIA A100 GPU. By "end- to- end", we mean that the inference time was measured from providing the PDF file path as input to the complete generation of the Markdown text. For MonkeyOCR, dots.ocr, and MinerU, inference was run with the vLLM backend and the default configuration (including the KV cache settings). The generated Markdown text was tokenized with the "cl100k_base" tokenizer to compute the number of output tokens. For dots.ocr specifically, 200 threads were used for concurrent page processing, and the Base64- encoded image content in the produced Markdown text was replaced with a dummy path (UUID- based, prefixed with "images/" and suffixed with ".png") to ensure a reasonable token count. Table 13 provides a comprehensive comparison of inference efficiency across different methods. The proposed PaddleOCR- VL demonstrates clear and consistent advantages in both processing speed and memory efficiency. When deployed with the vLLM backend, it achieves \(15.8\%\) higher page throughput and \(14.2\%\) higher token throughput than the leading baseline, MinerU2.5, establishing itself as the most efficient solution overall. In addition, PaddleOCR- VL achieves notable memory savings, using roughly \(40\%\) less GPU memory than dots.ocr while sustaining significantly faster processing. 
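To make the three-stage asynchronous design described above concrete, the following is a minimal sketch using standard Python threads and queues, in which a VLM batch is flushed once either the size threshold or the waiting-time limit is reached. The stage bodies, batch size, timeout, and the recognize-batch call are placeholder assumptions for exposition, not the actual PaddleOCR-VL implementation.

```python
import queue
import threading
import time

BATCH_SIZE = 32          # flush a batch once this many blocks are queued (assumed value)
BATCH_TIMEOUT_S = 0.05   # ...or once the oldest queued block has waited this long

def data_loader(pdf_paths, out_q):
    """Stage 1: render PDF pages to images (stubbed here) and enqueue them."""
    for path in pdf_paths:
        out_q.put({"path": path, "image": f"<rendered {path}>"})
    out_q.put(None)  # sentinel: no more pages

def layout_worker(in_q, out_q):
    """Stage 2: layout detection + reading order; emits one item per element block."""
    while (page := in_q.get()) is not None:
        for idx in range(4):  # pretend each page yields 4 blocks
            out_q.put({"page": page["path"], "block": idx})
    out_q.put(None)

def vlm_worker(in_q, results):
    """Stage 3: aggregate blocks across pages and run batched recognition."""
    batch, deadline = [], None
    def flush():
        nonlocal batch, deadline
        if batch:
            # stand-in for a batched VLM call served by an inference engine
            results.extend(f"md({b['page']}#{b['block']})" for b in batch)
            batch, deadline = [], None
    while True:
        timeout = None if deadline is None else max(deadline - time.monotonic(), 0)
        try:
            item = in_q.get(timeout=timeout)
        except queue.Empty:
            flush()          # timeout hit: flush whatever has accumulated
            continue
        if item is None:
            flush()
            break
        batch.append(item)
        deadline = deadline or time.monotonic() + BATCH_TIMEOUT_S
        if len(batch) >= BATCH_SIZE:
            flush()          # size threshold hit

pages_q, blocks_q, results = queue.Queue(64), queue.Queue(256), []
threads = [
    threading.Thread(target=data_loader, args=(["a.pdf", "b.pdf"], pages_q)),
    threading.Thread(target=layout_worker, args=(pages_q, blocks_q)),
    threading.Thread(target=vlm_worker, args=(blocks_q, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), "blocks recognized")
```

In the deployed system, the batched recognition step is served by a high-throughput engine such as vLLM or SGLang (see Table 13), so this sketch only mirrors the scheduling logic that lets blocks from different pages share a batch.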
These results collectively confirm that PaddleOCR-VL attains state-of-the-art inference efficiency through a balanced optimization of speed and memory usage, making it highly suitable for real-world, high-throughput document understanding scenarios.

<table><tr><td>Methods</td><td>Total Time (s)↓</td><td>Pages/s↑</td><td>Tokens/s↑</td><td>Avg. VRAM Usage (GB)↓</td></tr><tr><td>MonkeyOCR-pro-1.2B† [1]</td><td>1456.4</td><td>0.6730</td><td>1120.3</td><td>75.5</td></tr><tr><td>dots.ocr† [52]</td><td>2784.6</td><td>0.3522</td><td>532.9</td><td>78.5</td></tr><tr><td>MinerU2.5† [2]</td><td>927.3</td><td>1.0574</td><td>1647.9</td><td>41.9</td></tr><tr><td>PaddleOCR-VL†</td><td>800.9</td><td>1.2241</td><td>1881.2</td><td>43.7</td></tr><tr><td>PaddleOCR-VL‡</td><td>917.6</td><td>1.0684</td><td>1641.5</td><td>49.8</td></tr></table>

Table 13 | End-to-End Inference Performance Comparison. † denotes the vLLM backend, and ‡ denotes the SGLang backend.

## 5. Conclusion

This report introduces PaddleOCR-VL, an advanced and efficient model for document parsing that excels at both element-level and page-level recognition. Its core component, PaddleOCR-VL-0.9B, built with a NaViT-style visual encoder and the ERNIE-4.5-0.3B language model, accurately recognizes complex elements such as text, tables, formulas, and charts in over 100 languages. PaddleOCR-VL achieves fast inference and low resource consumption, making it practical for real-world deployment. It outperforms existing pipeline solutions on many benchmarks and effectively handles challenging content, including handwriting and historical documents, while also converting chart visuals into structured data. Its broad multilingual support and strong performance have the potential to advance the application and development of multimodal document processing technologies, bringing innovation to automated analysis and information retrieval. This will significantly enhance the performance and stability of RAG systems, making information extraction from complex documents more efficient, thereby providing more reliable data support for future AI applications.

<--- Page Split --->

## References

[1] Zhang Li, Yuliang Liu, Qiang Liu, Zhiyin Ma, Ziyang Zhang, Shuo Zhang, Zidun Guo, Jiarui Zhang, Xinyu Wang, and Xiang Bai. Monkeyocr: Document parsing with a structure-recognition-relation triplet paradigm. arXiv preprint arXiv:2506.05218, 2025.

[2] Junbo Niu, Zheng Liu, Zhuangcheng Gu, Bin Wang, Linke Ouyang, Zhiyuan Zhao, Tao Chu, Tianyao He, Fan Wu, Qintong Zhang, et al. Mineru2.5: A decoupled vision-language model for efficient high-resolution document parsing. arXiv preprint arXiv:2509.22186, 2025.

[3] Hao Feng, Shu Wei, Xiang Fei, Wei Shi, Yingdong Han, Lei Liao, Jinghui Lu, Binghong Wu, Qi Liu, Chunhui Lin, et al. Dolphin: Document image parsing via heterogeneous anchor prompting. arXiv preprint arXiv:2505.14059, 2025.

[4] Yuan Liu, Zhongyin Zhao, Le Tian, Haicheng Wang, Xubing Ye, Yangxiu You, Zilin Yu, Chuhan Wu, Xiao Zhou, Yang Yu, et al. Points-reader: Distillation-free adaptation of vision-language models for document conversion. arXiv preprint arXiv:2509.01215, 2025.

[5] Baidu-ERNIE-Team. Ernie 4.5 technical report, 2025.

[6] An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengren Huang, Chenxu Lv, et al. Qwen3 technical report. arXiv preprint arXiv:2505.09388, 2025.
[7] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt- 4 technical report. arXiv preprint arXiv:2303.08774, 2023. [8] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen- tau Yih, Tim Rocktaschel, et al. Retrieval- augmented generation for knowledge- intensive nlp tasks. Advances in neural information processing systems, 33:9459- 9474, 2020. [9] Bin Wang, Chao Xu, Xiaomeng Zhao, Linke Ouyang, Fan Wu, Zhiyuan Zhao, Rui Xu, Kaiwen Liu, Yuan Qu, Fukai Shang, et al. Mineru: An open- source solution for precise document content extraction. arXiv preprint arXiv:2409.18839, 2024. [10] Cheng Cui, Ting Sun, Manhui Lin, Tingquan Gao, Yubo Zhang, Jiaxuan Liu, Xueqing Wang, Zelun Zhang, Changda Zhou, Hongen Liu, et al. Paddleocr 3.0 technical report. arXiv preprint arXiv:2507.05595, 2025. [11] Docling Team. Docling. https://github.com/docling- project/docling, 2024. Accessed: 2025- 06- 23. [12] Jake Poznanski, Jon Borchardt, Jason Dunkelberger, Regan Huff, Daniel Lin, Aman Rangapur, Christopher Wilhelm, Kyle Lo, and Luca Soldaini. olmocr: Unlocking trillions of tokens in pdfs with vision language models. arXiv preprint arXiv:2502.18443, 2025. [13] Ahmed Nassar, Andres Marafioti, Matteo Omenetti, Maksym Lysak, Nikolaos Livathinos, Christoph Auer, Lucas Morin, Rafael Teixeira de Lima, Yusik Kim, A Said Gurbuz, et al. Smolodcing: An ultra- compact vision- language model for end- to- end multi- modal document conversion. arXiv preprint arXiv:2503.11576, 2025. <--- Page Split ---> [14] opendatalab. Mineru2.0- 2505- 0.9b. https://huggingface.co/opendatalab/Minerv2.0- 2505- 0.9B, 2025. [15] Mostafa Dehghani, Basil Mustafa, Josip Djolonga, Jonathan Heek, Matthias Minderer, Mathilde Caron, Andreas Steiner, Joan Puigcerver, Robert Geirhos, Ibrahim M Alabdulmohsin, et al. Patch n'pack: Navit, a vision transformer for any aspect ratio and resolution. Advances in Neural Information Processing Systems, 36:2252- 2274, 2023. [16] Linke Ouyang, Yuan Qu, Hongbin Zhou, Jiawei Zhu, Rui Zhang, Qunshu Lin, Bin Wang, Zhiyuan Zhao, Man Jiang, Xiaomeng Zhao, et al. Omnidocbench: Benchmarking diverse pdf document parsing with comprehensive annotations. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 24838- 24848, 2025. [17] Yian Zhao, Wenyu Lv, Shangliang Xu, Jinman Wei, Guanzhong Wang, Qingqing Dang, Yi Liu, and Jie Chen. Deters beat yolos on real- time object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16965- 16974, 2024. [18] Xiuquan Hou, Meiqin Liu, Senlin Zhang, Ping Wei, Badong Chen, and Xuguang Lan. Relation detr: Exploring explicit position relation prior for object detection. In European Conference on Computer Vision, pages 89- 105. Springer, 2024. [19] Zilong Wang, Yiheng Xu, Lei Cui, Jingbo Shang, and Furu Wei. Layoutreader: Pre- training of text and layout for reading order detection. arXiv preprint arXiv:2108.11591, 2021. [20] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36:34892- 34916, 2023. [21] Haoran Wei, Chenglong Liu, Jinyue Chen, Jia Wang, Lingyu Kong, Yanming Xu, Zheng Ge, Liang Zhao, Jianjian Sun, Yuang Peng, et al. General ocr theory: Towards ocr- 2.0 via a unified end- to- end model. arXiv preprint arXiv:2409.01704, 2024. 
[22] Kwai Keye Team, Biao Yang, Bin Wen, Changyi Liu, Chenglong Chu, Chengru Song, Chongling Rao, Chuan Yi, Da Li, Dunju Zang, et al. Kwai keye- vl technical report. arXiv preprint arXiv:2507.01949, 2025. [23] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. [24] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2. 5- vl technical report. arXiv preprint arXiv:2502.13923, 2025. [25] Ting Sun, Cheng Cui, Yuning Du, and Yi Liu. Pp- dcolayout: A unified document layout detection model to accelerate large- scale data construction. arXiv preprint arXiv:2503.17213, 2025. [26] Zhilu Zhang and Mert Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. Advances in neural information processing systems, 31, 2018. [27] PaddlePaddle Authors. Erniekit. https://github.com/PaddlePaddle/ERNIE, 2025. [28] Maksym Lysak, Ahmed Nassar, Nikolaos Livathanos, Christoph Auer, and Peter Staar. Optimized table tokenization for table structure recognition. In International Conference on Document Analysis and Recognition, pages 37- 50. Springer, 2023. <--- Page Split ---> [29] Cheng- Lin Liu, Fei Yin, Da- Han Wang, and Qiu- Feng Wang. Casia online and offline chinese handwriting databases. In 2011 international conference on document analysis and recognition, pages 37- 41. IEEE, 2011. [30] Bin Wang, Zhuangcheng Gu, Guang Liang, Chao Xu, Bo Zhang, Botian Shi, and Conghui He. Unimernet: A universal network for real- world mathematical expression recognition. arXiv preprint arXiv:2404.15254, 2024. [31] Philippe Gervais, Anastasiia Fadeeva, and Andrii Maksai. Mathwriting: A dataset for handwritten mathematical expression recognition. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V. 2, pages 5459- 5469, 2025. [32] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv preprint arXiv:2203.10244, 2022. [33] Nitesh Methani, Pritha Ganguly, Mitesh M Khapra, and Pratyush Kumar. Plotqa: Reasoning over scientific plots. In Proceedings of the ieee/cvf winter conference on applications of computer vision, pages 1527- 1536, 2020. [34] Shankar Kantharaj, Rixie Tiffany Ko Leong, Xiang Lin, Ahmed Masry, Megh Thakkar, Enamul Hoque, and Shafiq Joty. Chart- to- text: A large- scale benchmark for chart summarization. arXiv preprint arXiv:2203.06486, 2022. [35] Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. Dvqa: Understanding data visualizations via question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5648- 5656, 2018. [36] Ahmed Masry, Parsa Kavehzadeh, Xuan Long Do, Enamul Hoque, and Shafiq Joty. Unichart: A universal vision- language pretrained model for chart comprehension and reasoning. arXiv preprint arXiv:2305.14761, 2023. [37] Leilani Battle, Peitong Duan, Zachery Miranda, Dana Mukusheva, Remco Chang, and Michael Stonebraker. Beagle: Automated extraction and interpretation of visualizations from the web. In Proceedings of the 2018 CHI conference on human factors in computing systems, pages 1- 8, 2018. [38] Kenny Davila, Bhargava Urala Kota, Srirangaraj Setlur, Venu Govindaraju, Christopher Tensmeyer, Sumit Shekhar, and Ritwick Chaudhry. Icdar 2019 competition on harvesting raw tables from infographics (chart- infographics). 
In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 1594- 1599. IEEE, 2019. [39] Benny J Tang, Angie Boggust, and Arvind Satyanarayan. Vistext: A benchmark for semantically rich chart captioning. arXiv preprint arXiv:2307.05356, 2023. [40] Junyu Luo, Zekun Li, Jinpeng Wang, and Chin- Yew Lin. Chartocr: Data extraction from charts images via a deep hybrid framework. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 1917- 1925, 2021. [41] Xu Zhong, Elaheh ShafieiBavani, and Antonio Jimeno Yepes. Image- based table recognition: data, model, and evaluation, 2020. <--- Page Split ---> [42] Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhui Chen, Nigel Collier, and Yasemin Altun. Deplot: One-shot visual language reasoning by plot- to- table translation. arXiv preprint arXiv:2212.10505, 2022. [43] Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311- 318, 2002. [44] VI Lcvenshtcin. Binary coors capable or 'correcting deletions, insertions, and reversals. In Soviet physics- doklady, volume 10, 1966. [45] Vik Paruchuri. Marker. https://github.com/datalab- to/marker, 2025. Accessed: 2025- 09- 25. [46] Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Hao Tian, Yuchen Duan, Weijie Su, Jie Shao, et al. Internvl3: Exploring advanced training and test- time recipes for open- source multimodal models. arXiv preprint arXiv:2504.10479, 2025. [47] Weiyun Wang, Zhangwei Gao, Lixin Gu, Hengjun Pu, Long Cui, Xingguang Wei, Zhaoyang Liu, Linglin Jing, Shenglong Ye, Jie Shao, et al. Internvl3. 5: Advancing open- source multimodal models in versatility, reasoning, and efficiency. arXiv preprint arXiv:2508.18265, 2025. [48] Google DeepMind. Gemini 2.5. https://blog.google/technology/google- deepmind/gemini- model- thinking- updates- march- 2025/, 2025. [49] chatdoc com. Ocflux. https://github.com/chatdoc- com/OCRFlux, 2025. Accessed:2025- 09- 25. [50] Mistral AI Team. Mistral- ocr. https://mistral.ai/news/mistral- ocr?utm_source=ai- bot.cn, 2025. [51] Souvik Mandal, Ashish Talewar, Paras Ahuja, and Prathamesh Juvatkar. Nanonets- ocr- s: A model for transforming documents into structured markdown with intelligent content recognition and semantic tagging, 2025. [52] rednote- hilab. dots.ocr: Multilingual document layout parsing in a single vision- language model, 2025. [53] Filimoa. open- parse. https://github.com/Filimoa/open- parse, 2024. Accessed: 2025- 06- 23. [54] Unstructured- IO. unstructured. https://github.com/Unstructured- IO/unstructured, 2022. Accessed: 2025- 06- 23. [55] breezedeus. Pix2text. https://github.com/breezedeus/Pix2Text, 2022. Accessed: 2025- 06- 23. [56] Mathpix. Mathpix snip: Convert images and pdfs to latex, docx, and more. https://mathpix.com/, 2025. <--- Page Split ---> [57] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual- linguistic tasks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 24185- 24198, 2024. [58] Lukas Blecher, Guillem Cucurull, Thomas Scialom, and Robert Stojnic. Nougat: Neural optical understanding for academic documents. 
arXiv preprint arXiv:2308.13418, 2023. [59] Alex WC Lee, Jonathan Chung, and Marco Lee. Gnhk: a dataset for english handwriting in the wild. In International Conference on Document Analysis and Recognition, pages 399- 412. Springer, 2021. [60] Atsunobu Kotani, Stefanie Tellex, and James Tompkin. Generating handwriting via decoupled style descriptors. In European Conference on Computer Vision, pages 764- 780. Springer, 2020. [61] Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, et al. Minicpm- v: A gpt- 4v level mllm on your phone. arXiv preprint arXiv:2408.01800, 2024. [62] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2- vl: Enhancing vision- language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. [63] Song Chen, Xinyu Guo, Yadong Li, Tao Zhang, Mingan Lin, Dongdong Kuang, Youwei Zhang, Lingfeng Ming, Fengyu Zhang, Yuran Wang, et al. Ocean- ocr: Towards general ocr application via a vision- language model. arXiv preprint arXiv:2501.15558, 2025. [64] Bin Wang, Fan Wu, Linke Ouyang, Zhuangcheng Gu, Rui Zhang, Renqiu Xia, Botian Shi, Bo Zhang, and Conghui He. Image over text: Transforming formula recognition evaluation with character detection matching. In 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19681- 19690, 2025. [65] Liang Zhang, Anwen Hu, Haiyang Xu, Ming Yan, Yichen Xu, Qin Jin, Ji Zhang, and Fei Huang. Tinychart: Efficient chart understanding with visual token merging and program- of- thoughts learning. arXiv preprint arXiv:2404.16635, 2024. [66] Jinyue Chen, Lingyu Kong, Haoran Wei, Chenglong Liu, Zheng Ge, Liang Zhao, Jianjian Sun, Chunrui Han, and Xiangyu Zhang. Onechart: Purify the chart structural extraction via one auxiliary token. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 147- 155, 2024. [67] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th symposium on operating systems principles, pages 611- 626, 2023. [68] Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Livia Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E Gonzalez, et al. Sglang: Efficient execution of structured language model programs. Advances in neural information processing systems, 37:62557- 62583, 2024. [69] PaddlePaddle Authors. Fastdeploy. https://github.com/PaddlePaddle/FastDeploy, 2025. <--- Page Split ---> [70] Qian Chen, Xianyin Zhang, Lifan Guo, Feng Chen, and Chi Zhang. Dianjin-ocr- r1: Enhancing ocr capabilities via a reasoning- and- tool interleaved vision- language model. arXiv preprint arXiv:2508.13238, 2025. [71] Lukas Blecher. pix2tex - latex ocr. https://github.com/lukas- blecher/LaTeX- OCR, 2022. Accessed: 2025- 06- 23. [72] Harold Mouchere, Christian Viard- Gaudin, Richard Zanibbi, and Utpal Garain. Icfhr 2014 competition on recognition of on- line handwritten mathematical expressions (crohme 2014). In 2014 14th international conference on frontiers in handwriting recognition, pages 791- 796. IEEE, 2014. [73] Harold Mouchere, Christian Viard- Gaudin, Richard Zanibbi, and Utpal Garain. Icfhr2016 crohme: Competition on recognition of online handwritten mathematical expressions. 
In 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), pages 607-612. IEEE, 2016.

[74] Mahshad Mahdavi, Richard Zanibbi, Harold Mouchere, Christian Viard-Gaudin, and Utpal Garain. Icdar 2019 crohme+ tfd: Competition on recognition of handwritten mathematical expressions and typeset formula detection. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 1533-1538. IEEE, 2019.

[75] Ye Yuan, Xiao Liu, Wondimu Dikubab, Hui Liu, Zhilong Ji, Zhongqin Wu, and Xiang Bai. Syntax-aware network for handwritten mathematical expression recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4553-4562, 2022.

[76] Tao Ge, Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, and Dong Yu. Scaling synthetic data creation with 1,000,000,000 personas. arXiv preprint arXiv:2406.20094, 2024.

<--- Page Split --->

## Appendix

### A. Training Dataset Details

This two-stage approach offers unique advantages in terms of data collection, as obtaining isolated element images along with their annotations is more feasible than collecting complete document pages containing different elements. In the following sections, we will elaborate on the construction of multimodal model training data for text, tables, formulas, and charts.

## A.1. Text

We have curated a large-scale dataset comprising 20 million high-quality image-text pairs. As shown in Figure A1, the dataset generation follows a rigorous multi-stage pipeline which primarily involves:

![](images/24_0.jpg) <center>Figure A1 | The construction method and characteristics of the text training data for PaddleOCR-VL-0.9B. </center>

1. Automatic Data Annotation: We design an automatic annotation pipeline that integrates lightweight document-structure models with large multimodal language models. Specifically, PP-StructureV3 is employed as an expert model to perform layout analysis and text recognition, generating pseudo labels that are converted into prompts for multimodal models such as ERNIE-4.5-VL and Qwen2.5-VL to refine. Finally, the refined labels are aggregated and randomly merged at multiple granularities to produce 20 million high-quality image-text training samples.

2. High-quality OCR Data Synthesis: During data distillation, low label quality in challenging scenarios like messy handwriting and dense blurry text was addressed by expanding the dataset through synthetic generation. Utilizing diverse CSS styles, over 200 fonts, and various corpora, we rendered a large number of images, thereby enhancing the model's capabilities in these difficult scenarios.

Ultimately, the data is meticulously annotated at three distinct hierarchical levels: text lines, text blocks, and text pages, with extensive language coverage of 109 languages, including major global ones such as Chinese, English, French, and Hindi. The dataset spans diverse scenes, including academic papers, newspapers, handwritten texts, ancient books, ID cards, tickets, and seals, and covers a variety of writing systems and text styles, such as printed, handwritten, scanned, and artistic fonts.

<--- Page Split --->

## A.2. Table

As shown in Figure A2, we constructed a large-scale dataset of over 5 million high-quality image-table pairs. Our dataset construction employs three key strategies: automatic data annotation, potential annotation mining, and high-quality data synthesis.
For coding efficiency, we adopt OTSL [28] as the model's target format instead of conventional HTML. The main dataset construction process is as follows:

![](images/25_0.jpg) <center>Figure A2 | The construction method and characteristics of the table training data for PaddleOCR-VL-0.9B. </center>

1. Automatic Data Annotation: To enhance the performance of PaddleOCR-VL in table recognition, we built a large-scale, diverse dataset covering various languages, border styles, and table types. Tables are first located using PP-StructureV3 [10]. For unlabeled images, we employed a multi-stage annotation pipeline: ERNIE-4.5-VL [5] first generates pseudo-labels, which are then validated by ERNIE-4.5-VL-28B-A3B [5] acting as a discriminative model. Rejected annotations are refined using Dianjin-OCR-R1 [70] (for tools, we use ERNIE-4.5-VL and PP-StructureV3 [10]). Finally, all annotations undergo rigorous rule-based verification, including n-gram analysis and HTML validation, to ensure only high-quality samples are used for training.

2. Potential Annotation Mining: For public data with potential annotations (e.g., from arXiv), we extract tables and the corresponding officially provided HTML source code. We then employ a mechanism combining regular expression matching with contextual and sequential alignment to construct accurate table-HTML pairs. The extracted HTML subsequently undergoes rule-based filtering, yielding high-quality data samples ready for model training.

3. High-quality Table Synthesis: To overcome data imbalance and high annotation costs, we introduce an innovative high-quality table synthesis tool which constitutes the cornerstone of our table data collection pipeline. This tool enables both randomized synthesis for comprehensive data supplementation and targeted synthesis to enhance recognition of specific table categories. Specifically, we first leverage LLMs to gather a diverse and extensive corpus. Then, our tool generates table training pairs through randomized configurations of structures, fonts, CSS styles, and textual content, while also supporting customized synthesis by specifying particular parameters to accurately simulate specialized table types. With a synthesis speed of 10,000 samples per hour, our tool has produced over 5,500,000 training instances, substantially enhancing our model's generalization capability and comprehensive performance in table recognition.

<--- Page Split --->

Through the aforementioned data construction strategies, we build a comprehensive table dataset encompassing diverse table categories and recognition scenarios, thereby providing robust support for training our model in the table recognition task.

## A.3. Formula

As shown in Figure A3, this dataset was developed using a range of strategies, including source code rendering, automatic data annotation, targeted synthesis of long-tail data, and public data collection. It encompasses a variety of formula scenarios, such as educational supplementary materials, test papers for primary and secondary schools, mathematical papers, PowerPoint courseware, university theses, financial research reports, and handwritten mathematical notes. The dataset features four types of formulas: Simple Printed Expressions, Complex Printed Expressions, Screen-Captured Expressions, and Handwritten Expressions, available in both Chinese and English.
The main process for constructing the dataset is as follows: ![](images/26_0.jpg) <center>Figure A3 | The construction method and characteristics of the formula training data for PaddleOCR-VL-0.9B. </center> 1. Source Code Rendering: To enhance the model's adaptability to a wide variety of unusual formula structures, a large amount of paper source code was scraped from arXiv, and LaTeX code for the formulas was extracted using regular expressions. Then, MinHash was used to remove duplicate and highly similar formula source codes, and KaTeX was employed to normalize the formula source codes, thereby reducing their ambiguity. Finally, the formulas were re-rendered into images using a formula rendering engine. 2. Automatic Data Annotation: For real-world formula data from exam papers, educational materials, and handwritten notes, the process begins with the use of the layout analysis method PP-StructureV3 [10] to identify the bounding boxes for formulas. Based on these bounding boxes, formula regions are cropped from the images. Subsequently, large multimodal language models, such as ERNIE-4.5-VL-28B-A3B [5], are employed to <--- Page Split ---> generate the LaTeX source code for these formulas. Given the rarity of Chinese formulas in real- world scenarios—where approximately 1 out of 100 formulas contains Chinese characters—PP- OCRv5 [10] is utilized to recognize characters within the cropped regions, enabling targeted optimization when Chinese characters are detected. Due to the complex and diverse nature of real- world formulas, recognition errors may occur with existing large models. To address this, a LaTeX rendering engine is used to filter the formulas generated by these models. Specifically, image- formula pairs that cannot be successfully rendered by xelatex are discarded. For those that render successfully, a more in- depth screening is conducted by comparing metrics such as the aspect ratio between the recognized image and the rendered image. 3. Targeted Synthesis of Long-tail Data: For certain long-tail formula structures, such as elementary school vertical calculations, formulas with strikethroughs, and handwritten formulas with explanatory arrows, existing multimodal large models struggle to accurately recognize them due to data distribution issues. To address this, LaTeX code is synthetically generated based on rules and inverse rendering is performed using a LaTeX rendering engine, thereby constructing image-formula matching pairs for these long-tail scenarios. 4. Public Data Collection: In order to enable the model to learn high-quality formula representations, a substantial amount of data has been collected from existing public datasets, including UniMER-1M [30] and MathWriting [31]. Specifically, UniMER-1M is oriented towards real document scenarios and has gathered 1 million formula data from arXiv, Pix2tex [71], CROHME [72, 73, 74], and HME100K [75]. On the other hand, MathWriting is currently the largest handwritten mathematical formula dataset, comprising 230,000 real handwritten formula samples and 400,000 synthetic handwritten formula samples. ## A.4. Chart We constructed a large- scale, bilingual (Chinese and English) dataset of over 0.8 million high- quality image- chart pairs. Our dataset construction employs four key strategies: public data collection and cleaning, automatic data annotation, data synthesis, and targeted long- tail data augmentation. 
The dataset covers a wide array of chart types from diverse sources, including academic papers, financial reports, and web pages. The main dataset construction process is as follows:

![](images/27_0.jpg) <center>Figure A4 | The construction method and characteristics of the chart training data for PaddleOCR-VL-0.9B. </center>

<--- Page Split --->

1. Public Data Collection and Cleaning: We collected a large number of samples from public datasets, including ChartQA [32], PlotQA [33], Chart2Text [34], DVQA [35], Unichart [36], Beagle [37], ChartINFO [38], visText [39], and ExcelChart [40]. However, the raw datasets suffered from poor annotation quality and extremely imbalanced data distributions. Thus, a meticulous data cleaning and filtering pipeline was implemented to remove noisy samples and ensure balanced clustering, resulting in a high-quality dataset of 220k samples.

2. Automatic Data Annotation: To annotate our large collection of unlabeled public and in-house data, we developed a two-stage annotation pipeline based on the Vision Large Language Model ERNIE-4.5-VL [5]. In the first stage, the model extracts tick labels from the x- and y-axes; in the second, random permutations of these labels are used to query corresponding data points, framing annotation as a data retrieval task. A final consistency check ensures that only verified annotations are included in the training set, guaranteeing high reliability.

3. Data Synthesis: To capture diverse visual styles and enhance model generalization, we designed a three-stage data synthesis pipeline. It begins with a large collection of base data tables, followed by an LLM Persona [76] strategy using ERNIE-X1 [5], which diversifies table content and generates persona-specific rendering code. This enables control over chart aesthetics such as color, font, and layout. Leveraging a billion distinct personas, the pipeline produces highly varied data structures and visual styles, substantially improving PaddleOCR-VL's generalization across real-world charts. For rendering, we employ matplotlib and seaborn.

4. Targeted Long-tail Data Augmentation: To improve generalization on real-world long-tail samples, we designed a data augmentation pipeline based on seed charts. It first selects long-tail samples by their distinctive visual features, then uses ERNIE-4.5-VL [5] to replicate their rendering code. ERNIE-X1 [5], guided by a specific persona [76], further diversifies the code by altering data tables and visual styles. Executing the modified code produces new augmented charts with corresponding data tables.

Through the four data construction strategies mentioned above, the final chart dataset covers a wide range of application scenarios and a rich variety of chart styles, providing strong support for the training of chart models.

<--- Page Split --->

## B. Supported Languages

PaddleOCR-VL supports a total of 109 languages. Table 6 in the main text shows the text line recognition accuracy for different languages. Table A1 lists the correspondence between each language category and the specific supported languages.
Table A1 | Supported Languages <table><tr><td>Language Category</td><td>Specific Languages</td></tr><tr><td>Chinese</td><td>Chinese</td></tr><tr><td>English</td><td>English</td></tr><tr><td>Korean</td><td>Korean</td></tr><tr><td>Japanese</td><td>Japanese</td></tr><tr><td>Thai</td><td>Thai</td></tr><tr><td>Greek</td><td>Greek</td></tr><tr><td>Tamil</td><td>Tamil</td></tr><tr><td>Telugu</td><td>Telugu</td></tr><tr><td>Arabic</td><td>Arabic, Persian, Uyghur, Urdu, Pashto, Kurdish, Sindhi, Balochi</td></tr><tr><td>Latin</td><td>French, German, Afrikaans, Italian, Spanish, Bosnian, Portuguese, Czech, Welsh, Danish, Estonian, Irish, Croatian, Uzbek, Hungarian, Serbian (Latin), Indonesian, Occitan, Icelandic, Lithuanian, Maori, Malay, Dutch, Norwegian, Polish, Slovak, Slovenian, Albanian, Swedish, Swahili, Tagalog, Turkish, Latin, Azerbaijani, Kurdish, Latvian, Maltese, Pali, Romanian, Vietnamese, Finnish, Basque, Galician, Luxembourgish, Romansh, Catalan, Quechua</td></tr><tr><td>Cyrillic</td><td>Russian, Belarusian, Ukrainian, Serbian (Cyrillic), Bulgarian, Mongolian, Abkhazian, Adyghe, Kabardian, Avar, Dargin, Ingush, Chechen, Lak, Lezgin, Tabasaran, Kazakh, Kyrgyz, Tajik, Macedonian, Tatar, Chuvash, Bashkir, Malian, Moldovan, Udmurt, Komi, Ossetian, Buryat, Kalmyk, Tuvan, Sakha, Karakalpak</td></tr><tr><td>Devanagari</td><td>Hindi, Marathi, Nepali, Bihari, Maithili, Angika, Bhojpuri, Magahi, Santali, Newari, Konkani, Sanskrit, Haryanvi</td></tr></table> <--- Page Split ---> ## C. Inference Performance on Different Hardware Configurations We measured the inference performance of PaddleOCR- VL on different hardware configurations, as summarized in Table A2. As observed, PaddleOCR- VL demonstrates stable and efficient inference performance across a wide range of hardware and backend configurations, showing that the system can flexibly adapt to diverse computing environments. Moreover, we are currently integrating the FastDeploy backend, which is expected to further enhance inference efficiency in future releases. Table A2 | End-to-End Inference Performance <table><tr><td>Hardware</td><td>Backend</td><td>Total Time (s)↓</td><td>Pages/s↑</td><td>Tokens/s↑</td><td>Avg. VRAM Usage (GB)↓</td></tr><tr><td rowspan="2">A100</td><td rowspan="2">vLLM SGLang</td><td>800.9</td><td>1.2241</td><td>1881.2</td><td>43.7</td></tr><tr><td>917.6</td><td>1.0684</td><td>1641.5</td><td>49.8</td></tr><tr><td rowspan="2">A10</td><td rowspan="2">vLLM SGLang</td><td>1238.0</td><td>0.7921</td><td>1217.2</td><td>14.1</td></tr><tr><td>1429.9</td><td>0.6858</td><td>1055.8</td><td>20.0</td></tr><tr><td rowspan="2">RTX 3060</td><td rowspan="2">vLLM SGLang</td><td>2749.1</td><td>0.3568</td><td>548.2</td><td>11.9</td></tr><tr><td>2792.4</td><td>0.3513</td><td>540.8</td><td>11.8</td></tr><tr><td>RTX 5070</td><td>vLLM</td><td>1292.9</td><td>0.7584</td><td>1165.5</td><td>8.9</td></tr><tr><td rowspan="2">RTX 4090D</td><td rowspan="2">vLLM SGLang</td><td>845.3</td><td>1.1597</td><td>1781.8</td><td>16.7</td></tr><tr><td>951.8</td><td>1.0303</td><td>1586.1</td><td>21.8</td></tr></table> <--- Page Split ---> ### D. Real-world Samples This appendix showcases the parsing and recognition capabilities of our proposed algorithm across a variety of challenging scenarios. Section D.1 demonstrates the overall document parsing capability of PaddleOCR- VL. Figures A5- A8 are examples of parsing different types of documents in Markdown format. 
Figures A9-A11 in section D.2 illustrate the superior ability of PaddleOCR-VL to process pages featuring intricate or challenging layouts. Figures A12 and A13 in section D.3 demonstrate that PaddleOCR-VL maintains excellent reading order when faced with complex layouts, such as those found in various reports, textbooks, newspapers, magazines, and even vertical documents. Section D.4 highlights the robust text recognition performance of PaddleOCR-VL in challenging cases, including multilingual text, handwritten text, and vertical text, which are presented in Figures A14-A22. The model's table recognition abilities are demonstrated in section D.5. Figures A23 and A24 showcase its robust handling of a wide array of table formats, including tables from academic papers, tables from financial reports, tables with watermarks, tables with images, tables with formulas, and photographs of tables. Figures in section D.6 detail the formula recognition performance. Figure A25 demonstrates the ability to handle various types of English formulas, including complex printed expressions, handwritten expressions, screen-captured expressions, and vertical formulas, while Figure A26 focuses on the ability to handle formulas that contain Chinese characters. In section D.7, PaddleOCR-VL demonstrates impressive chart recognition capabilities, a feature currently lacking in many expert OCR VLMs such as MinerU2.5 [2], dots.ocr [52], and MonkeyOCR [1]. Figures A27-A29 showcase our ability to parse various chart types, including pie charts, bar charts, line charts, bar-line hybrid charts, and heatmaps.

<--- Page Split --->

## D.1. Comprehensive Document Parsing

![](images/32_0.jpg) <center>Figure A5 | The Layout and Markdown Output for Book, Textbook and Academic Paper.</center>

<--- Page Split --->

![](images/33_0.jpg) <center>Figure A6 | The Layout and Markdown Output for Research Report (with chart recognition enabled), Financial Report, Slides and Exam Paper.</center>

<--- Page Split --->

![](images/34_0.jpg) <center>Figure A7 | The Layout and Markdown Output for Notes, Vertical Book and Ancient Book.</center>

<--- Page Split --->

![](images/35_0.jpg) <center>Figure A8 | The Layout and Markdown Output for Certificate, Newspaper and Magazine.</center>

<--- Page Split --->

## D.2. Layout Detection

![](images/36_0.jpg) <center>Figure A9 | The Layout Detection results for various types of documents.</center>

<--- Page Split --->

![](images/37_0.jpg) <center>Figure A10 | The Layout Detection results for various types of documents.</center>

<--- Page Split --->

![](images/38_0.jpg) <center>Figure A11 | The Layout Detection results for various types of documents.</center>

<--- Page Split --->

## D.3. Reading Order

![](images/39_0.jpg) <center>Figure A12 | The Reading Order results for various types of documents.</center>

<--- Page Split --->

![](images/40_0.jpg) <center>Figure A13 | The Reading Order results for various types of documents.</center>

<--- Page Split --->

## D.4. Text Recognition

![](images/41_0.jpg) <center>Figure A16 | The markdown output for English and Arabic documents.</center>

<--- Page Split --->

![](images/42_0.jpg) ![](images/42_1.jpg) ![](images/42_2.jpg) <center>Figure A17 | The markdown output for German and Chinese documents.</center>

<--- Page Split --->
## D.4.2. Handwriting Text Recognition

![](images/43_0.jpg) ![](images/43_1.jpg) <center>Figure A20 | The markdown output for Mixed Printed Handwritten Text and Handwritten Formula documents.</center>

<--- Page Split --->

![](images/44_0.jpg) <center>Figure A21 | The markdown output for Handwriting Chinese and Handwriting English documents.</center>

<--- Page Split --->

## D.4.3. Vertical Text Recognition

![](images/45_0.jpg) ![](images/45_1.jpg) ![](images/45_2.jpg) ![](images/45_3.jpg) <center>Figure A22 | The markdown output for various types of vertical documents.</center>

<--- Page Split --->

## D.5. Table Recognition

![](images/46_0.jpg) ![](images/46_1.jpg) <center>Figure A23 | The markdown output for various types of Tables.</center>
</center> <--- Page Split ---> ![](images/47_0.jpg) <center>Photo Table</center> <table><tr><td>序号</td><td>深度/cm</td><td>橡皮膜朝向</td><td>压强计左右液面高度差/cm</td></tr><tr><td>1</td><td>5</td><td>朝上</td><td>4.9</td></tr><tr><td>2</td><td>5</td><td>朝下</td><td>4.9</td></tr><tr><td>3</td><td>5</td><td>朝侧面</td><td>4.9</td></tr><tr><td>4</td><td>10</td><td>朝侧面</td><td>9.7</td></tr><tr><td>5</td><td>15</td><td>朝侧面</td><td>14.6</td></tr></table> Figure A24 | The markdown output for various types of Tables. <--- Page Split ---> ## D.6. Formula Recognition # Complex Printed Expressions \[{\frac{6f^{\prime\prime}(x_{2})(\nu(\lambda_{1}^{2}-1+\theta^{2}f(x_{2})^{2})+f^{\prime}(x_{2})^{2}-1)}{f^{\prime}(x_{2})}-\frac{6\theta^{2}f(x_{2})(\lambda_{1}^{2}-1+\theta^{2}f(x_{2})^{2}+\nu(f^{\prime}(x_{2})^{2}-1))}{f^{\prime}(x_{2})}}\] \[{+\ldots+\frac{4H^{2}\lambda_{1}^{2}\theta^{2}(1-\nu)f^{\prime}(x_{2})f^{\prime\prime}(x_{2})}{\lambda_{1}^{2}+\theta^{2}f(x_{2})^{2}}-\frac{H^{2}\lambda_{1}^{2}\theta^{4}(1-\nu)f(x_{2})f^{\prime}(x_{2})(\lambda_{1}^{2}+\theta^{2}f(x_{2})^{2}+f^{\prime}(x_{2})^{2})}{(\lambda_{1}^{2}+\theta^{2}f(x_{2})^{2})^{2}}}\] \[{+\ldots+12f^{\prime}(x_{2})(\theta^{2}\nu f(x_{2})+f^{\prime\prime}(x_{2}))=0.}\] \[{\frac{6f^{\prime\prime}(x_{2})(\nu(\lambda_{1}^{2}-1+\theta^{2}f(x_{2})^{2})+f^{\prime}(x_{2})^{2}-1)}{f^{\prime}(x_{2})}-\frac{6\theta^{2}f(x_{2})(\lambda_{1}^{2}-1+\theta^{2}f(x_{2})^{2}+\nu(f^{\prime}(x_{2})^{2}-1))}{f^{\prime}(x_{2})}}\] \[{+\ldots+\frac{4H^{2}\lambda_{1}^{2}\theta^{2}(1-\nu)f^{\prime}(x_{2})f^{\prime\prime}(x_{2})}{\lambda_{1}^{2}+\theta^{2}f(x_{2})^{2}}-\frac{H^{2}\lambda_{1}^{2}\theta^{4}(1-\nu)f(x_{2})f^{\prime}(x_{2})(\lambda_{1}^{2}+\theta^{2}f(x_{2})^{2}+f^{\prime}(x_{2})^{2})}{(\lambda_{1}^{2}+\theta^{2}f(x_{2})^{2})^{2}}}\] \[{+\ldots+12f^{\prime}(x_{2})(\theta^{2}\nu f(x_{2})+f^{\prime\prime}(x_{2}))=0.}\] # Handwritten Expressions \[\therefore f[q(x)] = \left\{ \begin{array}{ll}1, & |x|\in [0,1)\cup [2, + \infty)\\ 0, & |x|\in [1,2) \end{array} \right.\] \[\therefore f[g(x)] = \left\{ \begin{array}{ll}1, & |x|\in [0,1)\cup [2, + \infty)\\ 0, & |x|\in [1,2) \end{array} \right.\] # Screen-Captured Expressions # Vertical Formula \[f(x) = 2x^{2} + 2x + 3,\] \[f(3) = 2\cdot 3^{2} + 2\cdot 3 + 3 = 27,\] \[f(a) = 2a^{2} + 2a + 3,\] \[f(0) = 0 + 0 + 3 = 3,\mathrm{etc.}\] \[f(x) = 2x^{2} + 2x + 3,\] \[f(3) = 2\cdot 3^{2} + 2\cdot 3 + 3 = 27,\] \[f(a) = 2a^{2} + 2a + 3,\] \[f(0) = 0 + 0 + 3 = 3,\mathrm{etc.}\] ![](images/48_0.jpg) Figure A25 | The markdown output for various types of Formulas. <--- Page Split ---> ![](images/49_0.jpg) ![](images/49_1.jpg) ![](images/49_2.jpg) <center>Figure A26 | The markdown output for various types of Formulas. </center> <--- Page Split ---> ## D.7. 
Chart Recognition ![](images/50_0.jpg) ![](images/50_1.jpg) <center>Hints that the model might use without verbalizing them </center> <table><tr><td></td><td>2017</td><td>2018</td><td>2019</td><td>2020</td><td>2021</td><td>2022</td><td>2023</td><td>2024</td></tr><tr><td>经营(%)</td><td>37%</td><td>34%</td><td>27%</td><td>19%</td><td>20%</td><td>20%</td><td>21%</td><td>21%</td></tr><tr><td>存货(%)</td><td>18%</td><td>20%</td><td>18%</td><td>16%</td><td>18%</td><td>18%</td><td>14%</td><td>16%</td></tr><tr><td>现金(%)</td><td>22%</td><td>27%</td><td>36%</td><td>43%</td><td>37%</td><td>33%</td><td>40%</td><td>42%</td></tr><tr><td>投资(%)</td><td>23%</td><td>19%</td><td>19%</td><td>22%</td><td>26%</td><td>29%</td><td>25%</td><td>22%</td></tr></table> ![](images/50_2.jpg) <center>Bar-Line Hybrid Chart </center> ![](images/50_3.jpg) ![](images/50_4.jpg) <center>Figure A27 | The markdown output for various types of Charts. </center> ![](images/50_5.jpg) <--- Page Split ---> ![](images/51_0.jpg) ![](images/51_1.jpg) ![](images/51_2.jpg) <table><tr><td>年份</td><td>爱尔眼科·净资产收益率(ROE)</td><td>医疗服务(申万)·净资产收益率(ROE)</td></tr><tr><td>2015</td><td>19%</td><td>13.25%</td></tr><tr><td>2016</td><td>20.86%</td><td>14.79%</td></tr><tr><td>2017</td><td>18.85%</td><td>7.25%</td></tr><tr><td>2018</td><td>18.63%</td><td>8.96%</td></tr><tr><td>2019</td><td>22.03%</td><td>-0.79%</td></tr><tr><td>2020</td><td>21.24%</td><td>11.70%</td></tr><tr><td>2021</td><td>21.59%</td><td>12.31%</td></tr><tr><td>2022</td><td>18.02%</td><td>12.43%</td></tr><tr><td>2023</td><td>19.43%</td><td>11.11%</td></tr><tr><td>2024</td><td>17.86%</td><td>7.86%</td></tr></table> ![](images/51_3.jpg) <center>Figure A28 | The markdown output for various types of Charts. </center> ![](images/51_4.jpg) <center>图16主要行业净资产收益率</center> <center>资料来源:Wind,财通证券研究所</center> <table><tr><td>行业</td><td>2024Q4(亿元)</td><td>2023Q4(亿元)</td><td>2022Q4(亿元)</td><td>2024年同比(%)</td></tr><tr><td>信息技术服务业</td><td>10000</td><td>5000</td><td>5000</td><td>65</td></tr><tr><td>金融业</td><td>50000</td><td>40000</td><td>35000</td><td>25</td></tr><tr><td>电热及水生产和供应业</td><td>95000</td><td>80000</td><td>65000</td><td>15</td></tr><tr><td>采矿业</td><td>20000</td><td>20000</td><td>15000</td><td>15</td></tr><tr><td>租赁、商业服务业与贸易业</td><td>200000</td><td>180000</td><td>150000</td><td>10</td></tr><tr><td>制造业</td><td>190000</td><td>170000</td><td>145000</td><td>10</td></tr><tr><td>建筑业</td><td>50000</td><td>45000</td><td>40000</td><td>10</td></tr><tr><td>批发和零售业</td><td>75000</td><td>70000</td><td>60000</td><td>10</td></tr><tr><td>交通运输</td><td>160000</td><td>145000</td><td>130000</td><td>10</td></tr><tr><td>水利、环境和公共设施管理业</td><td>90000</td><td>80000</td><td>70000</td><td>5</td></tr><tr><td>房地产业</td><td>95000</td><td>85000</td><td>80000</td><td>5</td></tr><tr><td>住宿和餐饮业</td><td>2000</td><td>2000</td><td>2000</td><td>-10</td></tr><tr><td>农林牧渔业</td><td>5000</td><td>5000</td><td>5000</td><td>-30</td></tr></table> <--- Page Split ---> ![](images/52_0.jpg) <center>Bar-Line Hybrid Chart </center> ![](images/52_1.jpg) <center>Figure A29 | The markdown output for various types of Charts. </center> <--- Page Split ---> ### E. Compare with Others PaddleOCR- VL showcases superior performance in scenarios involving PDF pages with complex layout, consistently outperforming existing state- of- the- art (SOTA) models. This is evident from Figures A30 and A31, which highlight its exceptional capability in handling pages with intricate layouts and unique elements, surpassing other solutions. 
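The comparisons referenced here and in the following figures are qualitative, side-by-side renderings. As a rough, purely illustrative complement, the divergence between two parsers' markdown for the same page can be spot-checked with a character-level similarity ratio from the Python standard library. The snippet below is only a sketch of that idea; it does not reproduce the evaluation protocol behind any reported benchmark, and the two output strings in the usage example are hypothetical placeholders.

```python
from difflib import SequenceMatcher


def markdown_similarity(pred_a: str, pred_b: str) -> float:
    """Return a 0-1 character-level similarity ratio between two markdown strings.

    Whitespace runs are collapsed first so that line-wrapping differences
    between parsers do not dominate the score.
    """
    normalize = lambda s: " ".join(s.split())
    return SequenceMatcher(None, normalize(pred_a), normalize(pred_b)).ratio()


# Hypothetical usage: compare two parsers' outputs for the same page.
output_paddleocr_vl = "# Title\nSome recognized paragraph ..."
output_other_model = "# Title\nSome recognised paragraph ..."
print(f"similarity: {markdown_similarity(output_paddleocr_vl, output_other_model):.3f}")
```

A low ratio only flags that the two outputs differ; the figures themselves remain the basis for judging which output is actually correct.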
Moreover, the model demonstrates exceptionally high recognition accuracy in several domains, including Multilingual Text Recognition, Handwriting Text Recognition, and Vertical Text Recognition. Figures A32- A37 illustrate how PaddleOCR- VL outperforms competitors such as MinerU2.5 [2] and MonkeyOCR [1], which tend to misidentify languages like Russian and Hindi as English, overlook some handwritten characters, and struggle with vertical text recognition. In dealing with complex tables, PaddleOCR- VL's parsing accuracy stands out, as evidenced by Figures A38 and A39. This is a domain where other models frequently encounter difficulties. Additionally, Figure A40 demonstrates PaddleOCR- VL's proficiency in accurately parsing complex formulas. In contrast, other SOTA models often produce incorrect or flawed outputs when faced with challenging mathematical notations. Finally, as depicted in Figures A41 and A42, PaddleOCR- VL also excels in Chart Recognition. It outperforms multi- modal large language models like Qwen2.5VL- 72B [24] and GPT- 4o by accurately reconstructing the structure and content of charts. <--- Page Split ---> ![](images/54_0.jpg) <center>Figure A30 | Compare with others in Layout Detection. </center> <--- Page Split ---> ![](images/55_0.jpg) <center>Figure A31 | Compare with others in Layout Detection.</center> <--- Page Split ---> ## E.2. Text Recognition ### E.2.1. Multilingual Text Recognition ![](images/56_0.jpg) <center>Figure A32 | Compare with others in Multilingual Text Recognition.</center> <--- Page Split ---> ## E.2.2. Handwriting Text Recognition ![](images/57_0.jpg) ## PaddleOCR-VL 致每位同伴亚洲巡演的SWITH SWITH你好- 亚洲巡演结束后怀着感激的心情写下了这封信。首先有了SWITH才能够顺利结束首次亚洲巡演。真的感谢你们一直在一起,一直来支持我们,感到很安心。在每个演唱会,粉丝签名会和者其他日程中,都能够感受到每位SWITH的温暖的真心。虽然时间较短但是能够见到你们真是个幸运。读者SWITH的每封信让我思考了很多。每个认真写下的韩文都很可爱,很多人说自己的韩语不好很抱歉,但其实完全不需要道歉!并且想对大家说封信我都读了都很感谢。而且我决定一定要给你们写回一封信!感谢你们祝我安全飞行,愉快的一天感谢你们给我问好。虽然因为日程繁忙无法体验所有的东西,但感谢你们为我推荐好看的,好吃的和能享受的东西。最重要的是感谢你们一直相信我,等待着,而且会跟我们在以后的前途会在一起。尽管我们的国籍不同,语言和文化也有差异,但通过音乐 ## MinerU2.5 ## MonkeyOCR 亚洲巡演结束后怀着感激的心情写下了这封信。首先有了SWITH才能够顺利结束首次亚洲巡演。真的感谢你们一直在一起,一直来支持我们,感到很安心。在每个演唱会,粉丝签名会和者其他日程中,都能够感受到每位SWITH的温暖的真心,虽然时间较短但是能够见到你们真是个幸运。读者SWITH的每封信让我思考了很多。每个认真写下的韩文都很可爱,很多人说自己的韩语不好很抱歉,但其实完全不需要道歉!并且想对大家说封信我都读了都很感谢,而且我决定一定要给你们写回一封信!感谢你们祝我安全飞行,愉快的一天感谢你们给我问好。虽然因为日程繁忙无法体验所有的东西,但感谢你们为我推荐好看的,好吃的和能享受的东西。最重要的是感谢你们一直相信我,等待着,而且会跟我们在以后的前途会在一起。尽管我们的国籍不同,语言和文化也有差异,但通过音乐 亚洲巡演结束后怀着感激的心情写下了这封信。首先有了SWITH才能够顺利结束首次亚洲巡演。真的感谢你们一直在一起,一直来支持我们,感到很安心。在每个演唱会,粉丝签名会和者其他日程中,都能够感受到每位SWITH的温暖的真心。虽然时间较短但是能够见到你们真是个幸运。读者SWITH的每封信让我思考了很多。每个认真写下的韩文都很可爱,很多人说自己的韩语不好很抱歉,但其实完全不需要道歉!并且想对大家说封信我都读了都很感谢,而且我决定一定要给你们写回一封信!感谢你们祝我安全飞行,愉快的一天感谢你们给我问好。虽然因为日程繁忙无法体验所有的东西,但感谢你们为我推荐好看的,好吃的和能享受的东西。最重要的是感谢你们一直相信我,等待着,而且会跟我们在以后的前途会在一起。尽管我们的国籍不同,语言和文化也有差异,但通过音乐 Missing Text Missing Text Figure A35 | Compare with others in Handwriting Text Recognition. <--- Page Split ---> ![](images/58_0.jpg) <center>Figure A36 | Compare with others in Handwriting Text Recognition.</center> <--- Page Split ---> ![](images/59_0.jpg) <center>E.2.3. Vertical Text Recognition</center> ![](images/59_1.jpg) <center>Figure A37 | Compare with others in Vertical Text Recognition. </center> <--- Page Split ---> ## E.3. Table Recognition ![](images/60_0.jpg) <center>Figure A38 | Compare with others in Table Recognition. </center> <--- Page Split ---> ![](images/61_0.jpg) <center>Figure A39 | Compare with others in Table Recognition.</center> <--- Page Split ---> ![](images/62_0.jpg) <center>Figure A40 | Compare with others in Formula Recognition. 
</center> <--- Page Split ---> ## E.5. Chart Recognition ![](images/63_0.jpg) <center>Figure A41 | Compare with others in Chart Recognition. </center> <--- Page Split ---> ![](images/64_0.jpg) <center>Figure A42 | Compare with others in Chart Recognition. </center> <--- Page Split --->
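Throughout the table and chart examples above (Sections D.5, D.7, and E.5), recognized tables are serialized as HTML `<table>` fragments. For downstream use, such as turning a recognized table into a markdown grid, a minimal, dependency-free sketch based on the standard-library `html.parser` is shown below. It assumes flat, well-formed tables without nested `<table>` elements or row/column spans, which matches the fragments shown above but not every real-world table; the example fragment is a simplified excerpt of the one in Figure A24.

```python
from html.parser import HTMLParser


class TableExtractor(HTMLParser):
    """Collect the cell text of a flat HTML <table> into a list of rows."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell, self._in_cell = [], [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell, self._cell = True, []

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._row.append("".join(self._cell).strip())
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_cell:
            self._cell.append(data)


def html_table_to_markdown(fragment: str) -> str:
    """Convert a flat <table> fragment into a GitHub-style markdown table."""
    parser = TableExtractor()
    parser.feed(fragment)
    if not parser.rows:
        return ""
    header, *body = parser.rows
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(lines)


# Simplified excerpt of the recognized fragment shown in Figure A24.
fragment = ("<table><tr><td>序号</td><td>深度/cm</td></tr>"
            "<tr><td>1</td><td>5</td></tr></table>")
print(html_table_to_markdown(fragment))
```

When lxml or html5lib is available, `pandas.read_html` is a heavier alternative that also handles messier markup, but the standard-library route avoids extra dependencies for fragments as simple as the ones shown here.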