Document digitization has long been a multi-stage problem: first detect the layout, then extract the text, and finally try to reconstruct the structure. For Large Vision-Language Models (LVLMs), this often results in ‘structural hallucinations’: disordered rows, invented formulas, or unclosed syntax.
FireRedTeam has released FireRed-OCR-2B, a flagship model designed to treat document parsing as a structural engineering task rather than ‘impressionist’ text generation. Built on the Qwen3-VL-2B-Instruct architecture, the model establishes a new state-of-the-art (SOTA) for end-to-end solutions, reaching an overall score of 92.94% on the OmniDocBench v1.5 benchmark.
Shifting the Paradigm: Structural Engineering vs. Text Generation
Developers often find that even the most powerful general-purpose VLMs struggle with the dense spatial logic of a technical PDF. When a model ‘sees’ a complex table or a multi-line LaTeX equation, it frequently fails to maintain the hierarchical relationships between elements.
FireRed-OCR-2B addresses this through a specialized Progressive Training Pipeline consisting of three distinct stages:
- Multi-task Pre-alignment: This stage establishes spatial grounding by training the model on detection, region recognition, and layout-to-markdown tasks.
- Specialized SFT (Supervised Fine-Tuning): The model is fine-tuned on a high-quality, standardized Markdown dataset to ensure logical consistency and hierarchical expression.
- Format-Constrained GRPO: The final stage uses reinforcement learning to enforce syntactic validity.
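The first stage's multi-task objective can be pictured as a weighted sum of per-task losses. The release does not publish the exact objective, so the task names, weights, and numbers below are illustrative assumptions only:

```python
# Illustrative multi-task pre-alignment objective: a weighted sum of
# per-task losses. Task names and weights are assumptions for this sketch.
DEFAULT_WEIGHTS = {
    "detection": 1.0,
    "region_recognition": 1.0,
    "layout_to_markdown": 1.0,
}

def pre_alignment_loss(losses, weights=DEFAULT_WEIGHTS):
    """Combine per-task scalar losses into a single training objective."""
    return sum(weights[task] * value for task, value in losses.items())

# Example: one training step's per-task losses (made-up numbers).
total = pre_alignment_loss(
    {"detection": 0.42, "region_recognition": 0.31, "layout_to_markdown": 0.55}
)
```

Jointly weighting detection and recognition signals in one objective is what forces the model to learn where content sits on the page before it learns to transcribe it.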
The Core Innovation: Format-Constrained GRPO
The most significant technical differentiator for FireRed-OCR is its use of Format-Constrained Group Relative Policy Optimization (GRPO). While conventional fine-tuning focuses on character accuracy, GRPO introduces a reinforcement learning loop that rewards the model for specific structural properties:
- Formula Syntax: Ensuring LaTeX equations are mathematically valid.
- Table Integrity: Maintaining consistent row/column counts and correct HTML/Markdown tagging.
- Hierarchical Closure: Verifying that all opened structural tags (such as lists or headers) are correctly closed.
- Text Accuracy: Reducing character-level errors in dense text blocks.
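A reward combining these four criteria might look like the sketch below. The helper checks, reward terms, and equal weights are assumptions for illustration, not the published reward function:

```python
import difflib
import re

def balanced(text, open_tok, close_tok):
    """True if open/close tokens are balanced and depth never goes negative."""
    depth = 0
    for tok in re.findall(re.escape(open_tok) + "|" + re.escape(close_tok), text):
        depth += 1 if tok == open_tok else -1
        if depth < 0:
            return False
    return depth == 0

def table_integrity(md):
    """True if every row of a Markdown table has the same cell count."""
    rows = [r for r in md.strip().splitlines() if r.strip().startswith("|")]
    counts = {r.strip("|").count("|") for r in rows}
    return len(counts) <= 1

def format_reward(pred, ref):
    """Toy format-constrained reward over the four criteria above."""
    r = 0.0
    r += 0.25 * balanced(pred, "\\begin{", "\\end{")       # formula syntax (proxy)
    r += 0.25 * table_integrity(pred)                       # table integrity
    r += 0.25 * balanced(pred, "<table>", "</table>")       # hierarchical closure
    r += 0.25 * difflib.SequenceMatcher(None, pred, ref).ratio()  # text accuracy
    return r
```

The key property is that three of the four terms are computable from the prediction alone, so structural validity can be rewarded even when no character-perfect reference is available.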
By eliminating the need for a separate ‘critic’ model, a key benefit of the GRPO algorithm, FireRedTeam has optimized the training process to focus specifically on the high-friction areas of document parsing.
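The critic-free part of GRPO can be sketched in a few lines: rewards for a group of outputs sampled from the same page are normalized against the group's own statistics, so no learned value model is needed. The function below is a schematic, not the released training code:

```python
import statistics

# Schematic of GRPO's critic-free advantage: each sampled output's reward
# is compared against its own group's mean, not a learned value model.
def group_relative_advantages(rewards):
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Four sampled parses of the same page, scored by a format reward.
advantages = group_relative_advantages([0.2, 0.8, 0.5, 0.5])
```

Samples that beat their group's average get a positive advantage and are reinforced; the rest are suppressed, all without training a second network.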
Solving the Long-Tail Layout Problem
The ‘long tail’ of document layouts (e.g., non-standard legal forms, academic papers with overlapping figures, or handwritten annotations) is where most OCR pipelines break. FireRed-OCR addresses it with a ‘Geometry + Semantics’ Data Factory.
This approach uses geometric feature clustering and multi-dimensional tagging to synthesize balanced datasets. By combining geometric awareness with semantic understanding, the model maintains ‘in-the-wild robustness,’ outperforming traditional pipeline systems such as PaddleOCR on complex, non-standard layouts (benchmarked on the FireRedBench dataset).
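The internals of the data factory are not public, but the core idea, tagging samples by a geometric signature plus a semantic label and then rebalancing the buckets, can be sketched as follows (the feature choices and thresholds here are assumptions):

```python
from collections import Counter

def geometry_tag(boxes, page_w, page_h):
    """Coarse geometric signature from region bounding boxes (x0, y0, x1, y1):
    an estimated column count plus a dense/sparse coverage flag."""
    if not boxes:
        return "empty"
    coverage = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in boxes) / (page_w * page_h)
    columns = len({round(x0 / page_w, 1) for x0, _, _, _ in boxes})
    density = "dense" if coverage > 0.5 else "sparse"
    return f"cols={columns}|{density}"

def balance_report(samples):
    """Count (geometry, semantics) buckets to spot under-represented tails.
    Each sample is (boxes, page_w, page_h, semantic_label)."""
    return Counter((geometry_tag(b, w, h), sem) for b, w, h, sem in samples)
```

Under-filled buckets in the report are where a data factory would target synthesis, which is how rare layout/content combinations end up represented in training.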
Performance Benchmarks
In head-to-head comparisons on OmniDocBench v1.5, FireRed-OCR-2B (92.94%) significantly outperforms other end-to-end models, including:
- DeepSeek-OCR 2: 91.09%
- Gemini-3.0 Pro: 90.33%
- Qwen3-VL-235B: 89.15%
While some ‘pipeline’ solutions (which use separate models for detection and recognition) achieve slightly higher scores, FireRed-OCR-2B represents the leading performance for a single-model, end-to-end approach. This is particularly relevant for developers looking to reduce system complexity and inference latency in production RAG (Retrieval-Augmented Generation) environments.
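For RAG ingestion, one practical consequence of the end-to-end design is that the model's structured Markdown output can be chunked by heading, keeping tables and formulas intact inside a chunk. The splitter below is a generic sketch, not tied to any specific FireRed API, and the sample document is made up:

```python
import re

def chunk_markdown(md):
    """Split Markdown output into (heading, body) chunks, one per section,
    so table and formula structure stays whole within each chunk."""
    chunks, current, title = [], [], "preamble"
    for line in md.splitlines():
        m = re.match(r"#+\s+(.*)", line)
        if m:
            if current:
                chunks.append((title, "\n".join(current).strip()))
            title, current = m.group(1), []
        else:
            current.append(line)
    chunks.append((title, "\n".join(current).strip()))
    return chunks

# Hypothetical parsed output, ready for embedding and indexing.
doc = "# Intro\ntext\n# Results\n| a | b |\n| 1 | 2 |"
sections = chunk_markdown(doc)
```

Because the model emits one coherent Markdown document rather than per-region fragments, this kind of structure-aware chunking needs no extra layout model in the pipeline.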
Key Takeaways
I’ve summarized the technical significance and performance metrics of the FireRed-OCR-2B release into four key takeaways for AI engineers and data scientists.
Key Takeaways: FireRed-OCR-2B
- New End-to-End SOTA Performance: FireRed-OCR-2B has achieved a state-of-the-art (SOTA) score of 92.94% on the OmniDocBench v1.5 benchmark. This makes it the leading single-model solution for document parsing, outperforming significantly larger models such as Qwen3-VL-235B and Gemini-3.0 Pro in structural accuracy.
- Architectural Foundation: Built on the Qwen3-VL-2B-Instruct base, the model takes a Vision-Language Model (VLM) approach. It replaces traditional multi-stage pipelines (separate detection, cropping, and OCR steps) with a unified, end-to-end transformer architecture that outputs structured Markdown directly.
- Structural Integrity via GRPO: A major technical differentiator is the use of Format-Constrained GRPO (Group Relative Policy Optimization). This reinforcement learning technique rewards the model for maintaining syntactic validity, specifically ensuring that LaTeX formulas, table tags, and Markdown hierarchies are logically closed and mathematically consistent.
- ‘Geometry + Semantics’ Data Factory: To solve the problem of complex ‘in-the-wild’ layouts, FireRedTeam developed a specialized data engine. This ‘factory’ synthesizes datasets by balancing geometric layout features against semantic content, enabling the model to handle overlapping figures, multi-column academic papers, and non-standard forms more reliably than earlier iterations.
Check out the Model Weights and Repo.

