# NLP - Natural Language Processing
This software implements a heavily parallelized pipeline for Natural Language Processing of text files. It is used for nopaque's NLP service, but you can also use it standalone; for that purpose, a convenient wrapper script is provided.
## Software used in this pipeline implementation
- Official Debian Docker image (buster-slim) and programs from its free repositories: https://hub.docker.com/_/debian
- pyFlow (1.1.20): https://github.com/Illumina/pyflow/releases/tag/v1.1.20
- spaCy (3.0.3): https://github.com/explosion/spaCy/releases/tag/v3.0.3
- spaCy medium-sized models (3.0.0):
  - https://github.com/explosion/spacy-models/releases/tag/da_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/de_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/el_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/en_core_web_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/es_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/fr_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/it_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/nl_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/pt_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/ru_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/zh_core_web_md-3.0.0
## Use this image
1. Create input and output directories for the pipeline.
``` bash
mkdir -p /<my_data_location>/input /<my_data_location>/output
```
2. Place your text files inside `/<my_data_location>/input`. All files must contain text in the same language.
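If you are unsure whether your files are UTF-8 encoded (see the `--check-encoding` argument below), a quick pre-flight check with `iconv` can help. This snippet is not part of the pipeline, just a convenience:

``` bash
# List any .txt file in the input directory that is not valid UTF-8.
# If this prints nothing, you can run the pipeline without --check-encoding.
for f in input/*.txt; do
  [ -e "$f" ] || continue  # skip if the glob matched nothing
  iconv -f UTF-8 -t UTF-8 "$f" > /dev/null 2>&1 || echo "not UTF-8: $f"
done
```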
3. Start the pipeline process. Check the [Pipeline arguments](#pipeline-arguments) section for more details.
``` bash
# Option one: Use the wrapper script
## Install the wrapper script (only on first run). Get it from
## https://gitlab.ub.uni-bielefeld.de/sfb1288inf/nlp/-/raw/1.0.0/wrapper/nlp,
## make it executable and add it to your ${PATH}, for example:
curl -LO https://gitlab.ub.uni-bielefeld.de/sfb1288inf/nlp/-/raw/1.0.0/wrapper/nlp
chmod +x nlp
sudo mv nlp /usr/local/bin/
cd /<my_data_location>
nlp -i input -l <language_code> -o output <optional_pipeline_arguments>

# Option two: Classic Docker style
docker run \
--rm \
-it \
-u $(id -u $USER):$(id -g $USER) \
-v /<my_data_location>/input:/input \
-v /<my_data_location>/output:/output \
gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/nlp:1.0.0 \
-i /input \
-l <language_code> \
-o /output \
<optional_pipeline_arguments>
```
4. Check your results in the `/<my_data_location>/output` directory.
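As a rough sanity check (the exact set of output files depends on the pipeline configuration), you can compare how many files went in against how many came out:

``` bash
# Count files in the input and output directories; after a successful run,
# the output directory should no longer be empty.
in_count=$(find input -type f | wc -l)
out_count=$(find output -type f | wc -l)
echo "input files: $in_count, output files: $out_count"
```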
### Pipeline arguments

`--check-encoding`
* If set, the pipeline tries to automatically determine the right encoding for
your texts. Only use it if you are not sure that your input is provided in UTF-8.
* default = False
* required = False

`-l languagecode`
* Tells spaCy which language will be used.
* options = da (Danish), de (German), el (Greek), en (English), es (Spanish), fr (French), it (Italian), nl (Dutch), pt (Portuguese), ru (Russian), zh (Chinese)
* required = True

`--nCores corenumber`
* Sets the number of CPU cores being used during the NLP process.
* default = min(4, multiprocessing.cpu_count())
* required = False
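To see which default value the pipeline would pick on your machine, you can evaluate the same expression yourself (assuming `python3` is available on the host):

``` bash
# Mirrors the documented default: at most 4 cores, fewer on smaller machines.
python3 -c 'import multiprocessing; print(min(4, multiprocessing.cpu_count()))'
```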
``` bash
# Example with all arguments used
docker run \
--rm \
-it \
-u $(id -u $USER):$(id -g $USER) \
-v "$HOME"/nlp/input:/input \
-v "$HOME"/nlp/output:/output \
gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/nlp:1.0.0 \
-i /input \
-l en \
-o /output \
--check-encoding \
--nCores 8
```