# NLP - Natural Language Processing

This software implements a heavily parallelized pipeline for Natural Language Processing of text files. It is used for nopaque's NLP service, but you can also use it standalone; for that purpose a convenient wrapper script is provided.

## Software used in this pipeline implementation

- Official Debian Docker image (buster-slim) and programs from its free repositories: https://hub.docker.com/_/debian
- pyFlow (1.1.20): https://github.com/Illumina/pyflow/releases/tag/v1.1.20
- spaCy (3.0.3): https://github.com/explosion/spaCy/releases/tag/v3.0.3
- spaCy medium-sized models (3.0.0):
  - https://github.com/explosion/spacy-models/releases/tag/da_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/de_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/el_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/en_core_web_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/es_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/fr_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/it_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/nl_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/pt_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/ru_core_news_md-3.0.0
  - https://github.com/explosion/spacy-models/releases/tag/zh_core_web_md-3.0.0

## Use this image

1. Create input and output directories for the pipeline.

```bash
mkdir -p /<my_data_location>/input /<my_data_location>/output
```

2. Place your text files inside `/<my_data_location>/input`. All files must contain text in the same language.

3. Start the pipeline process. Check the [Pipeline arguments](#pipeline-arguments) section for more details.

```bash
# Option one: Use the wrapper script
## Install the wrapper script (only on first run). Get it from
## https://gitlab.ub.uni-bielefeld.de/sfb1288inf/nlp/-/raw/1.0.0/wrapper/nlp,
## make it executable and add it to your ${PATH}
cd /<my_data_location>
nlp -i input -l <languagecode> -o output

# Option two: Classic Docker style
docker run \
    --rm \
    -it \
    -u $(id -u $USER):$(id -g $USER) \
    -v /<my_data_location>/input:/input \
    -v /<my_data_location>/output:/output \
    gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/nlp:1.0.0 \
        -i /input \
        -l <languagecode> \
        -o /output
```

4. Check your results in the `/<my_data_location>/output` directory.

### Pipeline arguments

`--check-encoding`
* If set, the pipeline tries to automatically determine the correct encoding of your texts. Only use this if you are not sure that your input is provided in UTF-8.
* default = False
* required = False

`-l languagecode`
* Tells spaCy which language will be used.
* options = da (Danish), de (German), el (Greek), en (English), es (Spanish), fr (French), it (Italian), nl (Dutch), pt (Portuguese), ru (Russian), zh (Chinese)
* required = True

`--nCores corenumber`
* Sets the number of CPU cores used during the NLP process.
* default = min(4, multiprocessing.cpu_count())
* required = False

```bash
# Example with all arguments used
docker run \
    --rm \
    -it \
    -u $(id -u $USER):$(id -g $USER) \
    -v "$HOME"/nlp/input:/input \
    -v "$HOME"/nlp/output:/output \
    gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/nlp:1.0.0 \
        -i /input \
        -l en \
        -o /output \
        --check-encoding \
        --nCores 8
```
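To give an idea of what the `-l` option and the medium-sized models are used for, here is a minimal sketch of a single spaCy 3 processing step. The model name, input path, and tab-separated output are assumptions for illustration; the pipeline's actual output format is not specified here.

```python
# Illustrative sketch: annotate one UTF-8 text file with a spaCy medium model.
import spacy

nlp = spacy.load("en_core_web_md")  # medium English model, matching `-l en`

with open("input/example.txt", encoding="utf-8") as f:  # hypothetical path
    text = f.read()

doc = nlp(text)
for sent in doc.sents:
    for token in sent:
        # token text, lemma, part-of-speech tag, and named-entity type
        print(f"{token.text}\t{token.lemma_}\t{token.pos_}\t{token.ent_type_}")
```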
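The `--check-encoding` flag implies a detection-and-decode step before processing. The sketch below shows one common way to do this in Python, assuming the `chardet` library; whether the pipeline itself uses `chardet` is not documented here.

```python
# Illustrative sketch: guess a file's encoding and decode it to text.
import chardet

with open("input/example.txt", "rb") as f:  # hypothetical path
    raw = f.read()

guess = chardet.detect(raw)  # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73, ...}
encoding = guess["encoding"] or "utf-8"  # fall back to UTF-8 if detection fails
text = raw.decode(encoding, errors="replace")
```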
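Finally, "heavily parallelized" and the `--nCores` default can be pictured with pyFlow, which the pipeline is built on: independent per-file tasks are registered and scheduled across a bounded number of cores. The task command and directory layout below are hypothetical; only the `min(4, multiprocessing.cpu_count())` default mirrors the documented behaviour.

```python
# Illustrative sketch: a pyFlow workflow running one task per input file.
import multiprocessing
import os

from pyflow import WorkflowRunner


class NlpWorkflow(WorkflowRunner):
    def __init__(self, input_dir):
        self.input_dir = input_dir

    def workflow(self):
        # One independent task per input file; pyFlow runs them in parallel.
        for i, name in enumerate(os.listdir(self.input_dir)):
            cmd = "spacy-nlp {}".format(os.path.join(self.input_dir, name))  # hypothetical command
            self.addTask("nlp_{}".format(i), command=cmd)


n_cores = min(4, multiprocessing.cpu_count())  # the documented --nCores default
NlpWorkflow("input").run(mode="local", nCores=n_cores)
```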