diff --git a/README.md b/README.md
index 8c522ed..c932dcc 100644
--- a/README.md
+++ b/README.md
@@ -5,18 +5,13 @@ This software implements a heavily parallelized pipeline for Natural Language Pr
 
 ## Software used in this pipeline implementation
 - Official Debian Docker image (buster-slim) and programs from its free repositories: https://hub.docker.com/_/debian
 - pyFlow (1.1.20): https://github.com/Illumina/pyflow/releases/tag/v1.1.20
-- spaCy (3.0.3): https://github.com/tesseract-ocr/tesseract/releases/tag/4.1.1
+- spaCy (3.0.5): https://github.com/explosion/spaCy/releases/tag/v3.0.5
 - spaCy medium sized models (3.0.0):
-  - https://github.com/explosion/spacy-models/releases/tag/da_core_news_md-3.0.0
   - https://github.com/explosion/spacy-models/releases/tag/de_core_news_md-3.0.0
-  - https://github.com/explosion/spacy-models/releases/tag/el_core_news_md-3.0.0
   - https://github.com/explosion/spacy-models/releases/tag/en_core_web_md-3.0.0
-  - https://github.com/explosion/spacy-models/releases/tag/es_core_news_md-3.0.0
-  - https://github.com/explosion/spacy-models/releases/tag/fr_core_news_md-3.0.0
   - https://github.com/explosion/spacy-models/releases/tag/it_core_news_md-3.0.0
   - https://github.com/explosion/spacy-models/releases/tag/nl_core_news_md-3.0.0
-  - https://github.com/explosion/spacy-models/releases/tag/pt_core_news_md-3.0.0
-  - https://github.com/explosion/spacy-models/releases/tag/ru_core_news_md-3.0.0
+  - https://github.com/explosion/spacy-models/releases/tag/pl_core_news_md-3.0.0
   - https://github.com/explosion/spacy-models/releases/tag/zh_core_web_md-3.0.0
 
@@ -29,7 +24,7 @@ mkdir -p //input //output
 2. Place your text files inside `//input`. Files should all contain text of the same language.
 
-3. Start the pipeline process. Check the [Pipeline arguments](#pipeline-arguments) section for more details.
+3. Start the pipeline process. Check the pipeline help (`nlp --help`) for more details.
 
 ```
 # Option one: Use the wrapper script
 ## Install the wrapper script (only on first run). Get it from https://gitlab.ub.uni-bielefeld.de/sfb1288inf/nlp/-/raw/1.0.0/wrapper/nlp, make it executeable and add it to your ${PATH}
@@ -51,38 +46,3 @@ docker run \
 ```
 
 4. Check your results in the `//output` directory.
-```
-
-### Pipeline arguments
-
-`--check-encoding`
-* If set, the pipeline tries to automatically determine the right encoding for
-your texts. Only use it if you are not sure that your input is provided in UTF-8.
-* default = False
-* required = False
-
-`-l languagecode`
-* Tells spaCy which language will be used.
-* options = da (Danish), de (German), el (Greek), en (English), es (Spanish), fr (French), it (Italian), nl (Dutch), pt (Portuguese), ru (Russian), zh (Chinese)
-* required = True
-
-`--nCores corenumber`
-* Sets the number of CPU cores being used during the NLP process.
-* default = min(4, multiprocessing.cpu_count())
-* required = False
-
-``` bash
-# Example with all arguments used
-docker run \
-    --rm \
-    -it \
-    -u $(id -u $USER):$(id -g $USER) \
-    -v "$HOME"/ocr/input:/input \
-    -v "$HOME"/ocr/output:/output \
-    gitlab.ub.uni-bielefeld.de:4567/sfb1288inf/nlp:1.0.0 \
-        -i /input \
-        -l en \
-        -o /output \
-        --check-encoding \
-        --nCores 8 \
-```
diff --git a/nlp b/nlp
index e97e0f4..a8a1c3c 100755
--- a/nlp
+++ b/nlp
@@ -156,7 +156,7 @@ def parse_args():
                         type=int)
     parser.add_argument('--n-cores',
                         default=min(4, multiprocessing.cpu_count()),
-                        help='Number of CPU threads to be used',
+                        help='Number of CPU threads to be used (Default: min(4, number of CPUs))',
                         type=int)
     parser.add_argument('--zip',
                         help='Create one zip file per filetype')
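The `nlp` hunk above only changes a help string, so the `--n-cores` behavior it documents can be verified in isolation. The sketch below reproduces just the `default=min(4, multiprocessing.cpu_count())` argument from the diff; the surrounding parser setup is a minimal stand-in, not the rest of the real `parse_args()`:

```python
# Minimal sketch of the --n-cores option shown in the nlp diff.
# Only the option name, default, and help text come from the diff;
# the bare ArgumentParser here is an assumption for illustration.
import argparse
import multiprocessing

parser = argparse.ArgumentParser()
parser.add_argument('--n-cores',
                    default=min(4, multiprocessing.cpu_count()),
                    help='Number of CPU threads to be used (Default: min(4, number of CPUs))',
                    type=int)

# With no flag, the default caps parallelism at 4 cores even on larger machines;
# an explicit value overrides the cap.
print(parser.parse_args([]).n_cores)
print(parser.parse_args(['--n-cores', '8']).n_cores)
```

Capping the default at 4 keeps the container from claiming every CPU on shared hosts while still using all cores on small machines; passing `--n-cores` explicitly (as in the removed README example) lifts the cap.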