
# NLP - Natural Language Processing

This software implements a heavily parallelized pipeline for Natural Language Processing of text files. It is used for nopaque's NLP service, but you can also use it standalone; for that purpose a convenient wrapper script is provided.

## Software used in this pipeline implementation

## Use this image

1. Create input and output directories for the pipeline.

   ```bash
   mkdir -p /<my_data_location>/input /<my_data_location>/output
   ```

2. Place your text files inside `/<my_data_location>/input`. Files should all contain text of the same language.

3. Start the pipeline process. Check the pipeline help (`nlp --help`) for more details.

   ```bash
   # Option one: Use the wrapper script
   ## Install the wrapper script (only on first run). Get it from
   ## https://gitlab.ub.uni-bielefeld.de/sfb1288inf/nlp/-/raw/1.0.0/wrapper/nlp,
   ## make it executable and add it to your ${PATH}
   cd /<my_data_location>
   nlp -i input -l <language_code> -o output <optional_pipeline_arguments>
   ```

   ```bash
   # Option two: Classic Docker style
   docker run \
       --rm \
       -it \
       -u $(id -u $USER):$(id -g $USER) \
       -v /<my_data_location>/input:/input \
       -v /<my_data_location>/output:/output \
       gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/nlp:1.0.0 \
           -i /input \
           -l <language_code> \
           -o /output \
           <optional_pipeline_arguments>
   ```
4. Check your results in the `/<my_data_location>/output` directory.
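Put together, a first run might look like the following sketch. Everything here uses example values: `/tmp/nlp_demo` stands in for `/<my_data_location>` and `en` for the language code. The `docker run` command is echoed rather than executed, so the sketch also shows the exact invocation that would be issued; drop the `echo` to actually run the pipeline.

```shell
#!/bin/sh
# Example values only: adjust DATA and LANG_CODE for your setup.
DATA=/tmp/nlp_demo
LANG_CODE=en

# Step 1: create the input and output directories.
mkdir -p "${DATA}/input" "${DATA}/output"

# Step 2: place text files (all in the same language) in the input directory.
printf 'This is a sample text.\n' > "${DATA}/input/sample.txt"

# Step 3: run the pipeline. Echoed here so the sketch works without Docker
# installed; remove the leading "echo" to perform a real run.
echo docker run --rm -it \
    -u "$(id -u):$(id -g)" \
    -v "${DATA}/input:/input" \
    -v "${DATA}/output:/output" \
    gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/nlp:1.0.0 \
    -i /input -l "${LANG_CODE}" -o /output

# Step 4: after a real run, results appear in ${DATA}/output.
```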