# OCR - Optical Character Recognition
This software implements a heavily parallelized pipeline to recognize text in PDF files. It is used for nopaque's OCR service, but it can also be used standalone; a convenient wrapper script is provided for that purpose. The pipeline is designed to run on Linux operating systems, but with some tweaks it should also run on Windows with WSL installed.
## Software used in this pipeline implementation
- Official Debian Docker image (buster-slim): https://hub.docker.com/_/debian
- Software from Debian Buster's free repositories
- ocropy (1.3.3): https://github.com/ocropus/ocropy/releases/tag/v1.3.3
- pyFlow (1.1.20): https://github.com/Illumina/pyflow/releases/tag/v1.1.20
- Tesseract OCR (5.0.0): https://github.com/tesseract-ocr/tesseract/releases/tag/5.0.0
## Installation
1. Install Docker and Python 3.
2. Clone this repository: `git clone https://gitlab.ub.uni-bielefeld.de/sfb1288inf/ocr.git`
3. Build the Docker image: `docker build -t gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/ocr:v0.1.0 ocr`
4. Add the wrapper script (`wrapper/ocr` relative to this README file) to your `${PATH}`.
5. Create working directories for the pipeline: `mkdir -p /<my_data_location>/{input,models,output}`.
6. Place your Tesseract OCR model(s) inside `/<my_data_location>/models` (see the sketch after this list).
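A minimal sketch of steps 2–6 as a single shell session. The clone location, the temporary `PATH` modification, and the example model filename `eng.traineddata` are assumptions; substitute your own paths and models.

```bash
# Clone the repository and build the Docker image (steps 2-3)
git clone https://gitlab.ub.uni-bielefeld.de/sfb1288inf/ocr.git
docker build -t gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/ocr:v0.1.0 ocr

# Make the wrapper script available on your PATH for this session (step 4);
# add the line to your ~/.bashrc or similar to make it permanent.
export PATH="${PATH}:$(pwd)/ocr/wrapper"

# Create the working directories (step 5); /<my_data_location> is a placeholder.
mkdir -p /<my_data_location>/{input,models,output}

# Copy a Tesseract OCR model into the models directory (step 6);
# "eng.traineddata" is just an example filename.
cp /path/to/eng.traineddata /<my_data_location>/models/
```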
## Use the Pipeline
1. Place your PDF files inside `/<my_data_location>/input`. Files should all contain text of the same language.
2. Clear your `/<my_data_location>/output` directory.
3. Start the pipeline process. Check the pipeline help (`ocr --help`) for more details.
```bash
cd /<my_data_location>
# <model_code> is the model filename without the ".traineddata" suffix
ocr \
  --input-dir input \
  --output-dir output \
  --model-file models/<model> \
  -m <model_code> <optional_pipeline_arguments>
# More than one model
ocr \
  --input-dir input \
  --output-dir output \
  --model-file models/<model1> \
  --model-file models/<model2> \
  -m <model1_code>+<model2_code> <optional_pipeline_arguments>
# Instead of multiple --model-file statements, you can also use
ocr \
  --input-dir input \
  --output-dir output \
  --model-file models/* \
  -m <model1_code>+<model2_code> <optional_pipeline_arguments>
```
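As a concrete illustration of the template above, assuming the standard Tesseract models `eng.traineddata` and `deu.traineddata` have been placed in `/<my_data_location>/models` (the model code is the filename without the `.traineddata` suffix, and codes are combined with `+` just as in Tesseract's own `-l eng+deu` syntax):

```bash
cd /<my_data_location>
ocr \
  --input-dir input \
  --output-dir output \
  --model-file models/eng.traineddata \
  --model-file models/deu.traineddata \
  -m eng+deu
```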
4. Check your results in the `/<my_data_location>/output` directory.
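For a quick overview of what the pipeline produced, a recursive listing is enough; the exact set of output files depends on the pipeline version and the options used.

```bash
# List everything the pipeline wrote to the output directory
ls -lR /<my_data_location>/output
```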