
# NLP - Natural Language Processing

This software implements a heavily parallelized pipeline for Natural Language Processing of text files. It is used for nopaque's NLP service, but you can also use it standalone; for that purpose a convenient wrapper script is provided.

## Software used in this pipeline implementation

## Use this image

1. Create input and output directories for the pipeline.

``` bash
mkdir -p /<my_data_location>/input /<my_data_location>/output
```

2. Place your text files inside `/<my_data_location>/input`. Files should all contain text of the same language.

3. Start the pipeline process. Check the Pipeline arguments section for more details.

``` bash
# Option one: Use the wrapper script
## Install the wrapper script (only on first run). Get it from
## https://gitlab.ub.uni-bielefeld.de/sfb1288inf/nlp/-/raw/1.0.0/wrapper/nlp,
## make it executable and add it to your ${PATH}
cd /<my_data_location>
nlp -i input -l <language_code> -o output <optional_pipeline_arguments>

# Option two: Classic Docker style
docker run \
    --rm \
    -it \
    -u $(id -u $USER):$(id -g $USER) \
    -v /<my_data_location>/input:/input \
    -v /<my_data_location>/output:/output \
    gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/nlp:1.0.0 \
        -i /input \
        -l <language_code> \
        -o /output \
        <optional_pipeline_arguments>
```

4. Check your results in the `/<my_data_location>/output` directory.

### Pipeline arguments

`--check-encoding`
* If set, the pipeline tries to automatically determine the right encoding for
your texts. Use it only if you are not sure that your input is provided in UTF-8.
* default = False
* required = False
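If you are unsure whether `--check-encoding` is needed, you can inspect your input files beforehand. A minimal sketch using the common `file` utility (an assumption — it is not part of this pipeline, but is available on most Linux systems):

``` bash
# Create a sample file and report its character encoding.
# Input files already reported as us-ascii or utf-8 need no conversion.
printf 'Grüße aus Bielefeld\n' > sample.txt
file -b --mime-encoding sample.txt
```

Files reported with any other encoding are candidates for running the pipeline with `--check-encoding` set.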

`-l languagecode`
* Tells spaCy which language will be used.
* options = da (Danish), de (German), el (Greek), en (English), es (Spanish), fr (French), it (Italian), nl (Dutch), pt (Portuguese), ru (Russian), zh (Chinese)
* required = True

`--nCores corenumber`
* Sets the number of CPU cores being used during the NLP process.
* default = min(4, multiprocessing.cpu_count())
* required = False
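To pick a sensible value for `--nCores`, it helps to know how many cores the machine actually has. A small sketch using `nproc` from GNU coreutils (an assumption — any equivalent core-count command works), which also mirrors the pipeline's default of `min(4, cpu_count)`:

``` bash
# Number of available CPU cores (GNU coreutils).
cores=$(nproc)
# Mirror the pipeline's default: at most 4 cores, fewer on smaller machines.
default=$(( cores < 4 ? cores : 4 ))
echo "$default"
```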

``` bash
# Example with all arguments used
docker run \
    --rm \
    -it \
    -u $(id -u $USER):$(id -g $USER) \
    -v "$HOME"/nlp/input:/input \
    -v "$HOME"/nlp/output:/output \
    gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/nlp:1.0.0 \
        -i /input \
        -l en \
        -o /output \
        --check-encoding \
        --nCores 8
```