OCR Hinders RAG: Evaluating the Cascading Impact of OCR on Retrieval-Augmented Generation

(Figure: OHR-Bench framework overview)

This repository contains the official code of OHR-Bench, a benchmark designed to evaluate the cascading impact of OCR on RAG.

Overview

  • PDF, ground-truth structured data and Q&A datasets: [🤗 Hugging Face] pdfs.zip, data/retrieval_base/gt, data/qas_v2.json. The benchmark includes 8,500+ unstructured PDF pages from 7 domains (Textbook, Law, Finance, Newspaper, Manual, Academic and Administration) and 8,498 Q&A pairs covering 5 key components of OCR in document parsing: plain text, table, formula, chart and reading order. Each PDF page comes with human-verified ground-truth structured data (a download sketch follows this list).
  • Perturbed data with OCR errors: [🤗 Hugging Face] formatting_noise_[mild/moderate/severe] and semantic_noise_[GOT/MinerU/Qwen2.5-VL-72B]_[mild/moderate/severe]. To enable in-depth analysis of OCR's impact on RAG, OHR-Bench identifies two error types, Semantic Noise and Formatting Noise, and injects each at mild, moderate and severe levels based on real-world OCR errors.
  • Evaluation framework: [Github opendatalab/OHR-Bench]. We provide a RAG evaluation framework to assess how OCR-processed structured data and our perturbed data affect RAG, covering retrieval, generation and overall performance.
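
If you prefer to fetch the raw assets programmatically, here is a minimal sketch using the huggingface_hub library; the file patterns below are assumptions about the dataset layout, so check the dataset card and adjust them as needed.

from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Fetch only the PDF archive and ground-truth structured data from the
# OHR-Bench dataset repo (allow_patterns are a guess at the repo layout).
local_dir = snapshot_download(
    repo_id="opendatalab/OHR-Bench",
    repo_type="dataset",
    local_dir="ohr_bench_assets",
    allow_patterns=["pdfs.zip", "retrieval_base/gt/*"],
)
print("Downloaded to:", local_dir)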

Evaluation Results

| OCR | E.D. | Retrieval TXT | TAB | FOR | CHA | RO | ALL | Generation TXT | TAB | FOR | CHA | RO | ALL | Overall TXT | TAB | FOR | CHA | RO | ALL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ground Truth | - | 81.2 | 69.6 | 74.8 | 70.3 | 9.8 | 70.0 | 49.4 | 46.0 | 34.0 | 47.0 | 28.2 | 43.9 | 45.0 | 34.6 | 28.0 | 32.9 | 18.7 | 36.1 |
| *Pipeline-based OCR* | | | | | | | | | | | | | | | | | | | |
| MinerU | 0.24 | 67.7 | 48.5 | 51.1 | 16.5 | 5.9 | 50.1 | 45.9 | 39.3 | 28.6 | 9.7 | 29.5 | 36.7 | 41.4 | 28.5 | 23.0 | 9.3 | 17.8 | 30.0 |
| Marker | 0.28 | 75.2 | 57.8 | 55.4 | 19.7 | 5.9 | 56.6 | 44.5 | 37.8 | 27.8 | 10.9 | 26.2 | 35.9 | 40.1 | 28.1 | 22.3 | 10.0 | 16.2 | 29.5 |
| *End-to-end OCR* | | | | | | | | | | | | | | | | | | | |
| GOT | 0.27 | 62.1 | 41.0 | 48.7 | 17.4 | 3.7 | 45.4 | 37.5 | 28.5 | 24.1 | 8.5 | 7.1 | 27.8 | 35.3 | 22.9 | 20.1 | 8.2 | 5.3 | 24.6 |
| Nougat | 0.34 | 59.1 | 32.7 | 44.2 | 11.3 | 4.4 | 40.9 | 36.7 | 22.9 | 22.9 | 6.4 | 6.9 | 25.5 | 33.5 | 18.4 | 19.4 | 5.8 | 3.6 | 14.5 |
| *Vision-Language Model for OCR* | | | | | | | | | | | | | | | | | | | |
| Qwen2.5-VL-72B | 0.18 | 74.6 | 59.8 | 59.7 | 38.2 | 5.3 | 59.2 | 44.4 | 42.1 | 31.8 | 27.0 | 11.6 | 37.5 | 40.6 | 31.1 | 26.1 | 19.0 | 8.8 | 31.1 |
| InternVL2.5-78B | 0.28 | 68.2 | 57.7 | 55.3 | 45.1 | 2.7 | 55.8 | 41.8 | 41.8 | 29.0 | 33.6 | 3.3 | 35.8 | 38.2 | 31.0 | 23.3 | 22.9 | 3.1 | 29.6 |

We evaluate the suitability of current OCR solutions for real-world RAG applications through comprehensive experiments on OHR-Bench. E.D. denotes the edit distance between the OCR output and the ground-truth structured data (lower is better). For retrieval, generation and overall performance, we report the generalized LCS or F1 score for five types of evidence sources: plain text (TXT), table (TAB), formula (FOR), chart (CHA), and reading order (RO).

We derive conclusions as follows:

  • VLMs for OCR achieve the best overall performance. Among all evaluated OCR solutions, Qwen2.5-VL-72B performs best.
  • All OCR solutions cause performance degradation. Even the best solution loses about 14% F1-score in the overall evaluation relative to the ground truth, with larger losses at the retrieval and generation stages.

Getting Started

Installation

pip install -r requirements.txt

Dataset preparation

OCR processed structured data

To evaluate your RAG system on our benchmark, follow these steps:

  1. Download Perturbed Data: Download the data with formatting and semantic noise from the zip files on Hugging Face and unzip them, or use load_dataset("opendatalab/OHR-Bench") from the datasets library to obtain the relevant fields (see the sketch after this list).
  2. Organize the Data: Place the folders retrieval_base/formatting_noise_[mild/moderate/severe] and retrieval_base/semantic_noise_[GOT/MinerU/Qwen2.5-VL-72B]_[mild/moderate/severe] in the data/retrieval_base directory of this project.
  3. Run Evaluation: Follow the instructions in Run Evaluation.
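
A minimal sketch of step 1 via the datasets library is shown below; the split and field names are not fixed here, so inspect the loaded object (or the dataset card) to see what is actually available.

from datasets import load_dataset  # pip install datasets

# Load OHR-Bench from the Hugging Face Hub and inspect its structure.
ds = load_dataset("opendatalab/OHR-Bench")
print(ds)  # available splits and their columns

# Peek at a single record to see the fields it carries.
first_split = list(ds.keys())[0]
print(next(iter(ds[first_split])))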

To evaluate your OCR results using this benchmark:

  1. Organize the Data: Run your OCR models on the PDFs (available on Hugging Face) and place the OCR-processed structured data in the data/retrieval_base directory, using the ground truth (data/retrieval_base/gt) as a reference layout. The sub-folder names indicate the domain of the parsed results, and each JSON file, named after its corresponding PDF, contains the parsed results for that document (see the directory structure, JSON schema and sketch below).
  2. Run Evaluation: Follow the instructions in Run Evaluation.
Directory Structure
retrieval_base/gt/ # We provide gt and MinerU-processed structured data as an illustration here
├── finance # Domain
│   ├── 3M_2023Q2_10Q.json # Parsed results
│   ├── ...
├── textbook
...
OCR Processed Data
[
    {
        "page_idx": 0, // Page index
        "text": "...", // OCR processed structured data
    },
    ...
]
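
If you need to convert your own OCR output into this layout, the following sketch shows one way to do it; save_ocr_result and its arguments are illustrative placeholders, not part of this repository.

import json
from pathlib import Path

def save_ocr_result(domain, doc_name, pages, base_dir="data/retrieval_base/my_ocr"):
    """Write one document's pages in the page_idx/text schema shown above."""
    out_path = Path(base_dir) / domain / f"{doc_name}.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    records = [{"page_idx": i, "text": text} for i, text in enumerate(pages)]
    out_path.write_text(json.dumps(records, ensure_ascii=False, indent=4), encoding="utf-8")

# Example: two parsed pages of a finance document (placeholder content).
save_ocr_result("finance", "3M_2023Q2_10Q", ["page 1 text ...", "page 2 text ..."])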

QA data

The Q&A data is placed in data/qas_v2.json and is structured as follows:

Q&A JSON
[
    {
        "doc_name": "finance/JPMORGAN_2021Q1_10Q", // Document source
        "ID": "00073cc2-c801-467c-9039-fca63c78c6a9", // Unique ID
        "questions": "What was the total amount of nonaccrual loans retained as of March 31, 2021?",
        "answers": "842",
        "doc_type": "finance", // Q&A domain.
        "answer_form": "Numeric", // Answer format.
        "evidence_source": "table", // Evidence source.
        "evidence_context": "Nonaccrual loans retained $^{(\\mathrm{a})}$ & \\$ & 842 & \\$ & 689 & $22 \\%$", // Evidence.
        "evidence_page_no": 24
    },
    ...
]
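
For example, to load the Q&A file and slice it by evidence source or domain (field names as in the schema above):

import json
from collections import Counter

with open("data/qas_v2.json", encoding="utf-8") as f:
    qas = json.load(f)

print(len(qas), "Q&A pairs")
print(Counter(qa["evidence_source"] for qa in qas))  # distribution over evidence sources

# Keep only table-based questions from the finance domain.
table_qas = [qa for qa in qas
             if qa["evidence_source"] == "table" and qa["doc_type"] == "finance"]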

LLMs preparation

In src/configs, configure your local LLM path or your GPT API key.

GPT_api_key = 'Your KEY Here'  # openai.api_key
...
Qwen2_7B_local_path = 'Qwen/Qwen2-7B-Instruct' # download from Hugging Face or your local path

Run Evaluation

To evaluate your OCR results, follow the instructions in the Dataset Preparation section to organize your OCR data.

# The first argument specifies which OCR results to use for evaluation.
# The second argument specifies the retrievers or LLMs.

# Args: Document source, LLM
# Generation with gt
bash shell/generation.sh gt qwen2_7b
# Generation with mild semantic noise (OCR=MinerU)
bash shell/generation.sh semantic_noise_MinerU_mild qwen2_7b

# Args: Document source, retriever
# Retrieval with gt
bash shell/retrieval.sh gt bge-m3
# Retrieval with moderate semantic noise (OCR=MinerU)
bash shell/retrieval.sh semantic_noise_MinerU_moderate bge-m3

# Args: Document source, retriever, LLM
# End-to-end with gt
bash shell/end2end.sh gt bge-m3 qwen2_7b
# End-to-end with severe semantic noise (OCR=MinerU)
bash shell/end2end.sh semantic_noise_MinerU_severe bge-m3 qwen2_7b
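
To sweep several document sources in one go, a small wrapper such as the sketch below can help; it simply calls the shell scripts above and assumes it is run from the repository root with the corresponding retrieval_base folders in place.

import subprocess

RETRIEVER, LLM = "bge-m3", "qwen2_7b"
sources = ["gt"] + [f"semantic_noise_MinerU_{lvl}" for lvl in ("mild", "moderate", "severe")]

for src in sources:
    # Mirrors: bash shell/end2end.sh <document source> <retriever> <llm>
    subprocess.run(["bash", "shell/end2end.sh", src, RETRIEVER, LLM], check=True)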

Acknowledgement

The evaluation framework is based on CRUD. Many thanks to the authors of this brilliant project.

Citation

@article{zhang2024ocr,
  title={OCR Hinders RAG: Evaluating the Cascading Impact of OCR on Retrieval-Augmented Generation},
  author={Junyuan Zhang and Qintong Zhang and Bin Wang and Linke Ouyang and Zichen Wen and Ying Li and Ka-Ho Chow and Conghui He and Wentao Zhang},
  journal={arXiv preprint arXiv:2412.02592},
  year={2024}
}

Copyright Statement

The PDFs are collected from public online channels and community user contributions. Content that is not allowed for distribution has been removed. The dataset is for research purposes only and not for commercial use. If there are any copyright concerns, please contact [email protected].
