diff --git a/app/templates/_base/_modals/_manual/01_introduction.html.j2 b/app/templates/_base/_modals/_manual/01_introduction.html.j2
index e7ea688a..db435050 100644
--- a/app/templates/_base/_modals/_manual/01_introduction.html.j2
+++ b/app/templates/_base/_modals/_manual/01_introduction.html.j2
@@ -11,7 +11,7 @@
 Now, using the files in .vrt format, you can create a corpus. This can be done
-in the Dashboard or Corpus Analysis under “My Corpora.” Click on “Create corpus”
-and add a title and description for your corpus. After submitting, navigate down to
-the “Corpus files” section. Once you have added the desired .vrt files, select “Build”
-on the corpus page under “Actions.” Now, your corpus is ready for analysis.
+in the Dashboard or Corpus Analysis sections under “My Corpora.” Click on “Create corpus”
+and add a title and description for your corpus. After submitting, you will automatically
+be taken to the corpus overview page of your new, still empty corpus (this page can be
+reached again at any time via the corpus lists).
+Further down, in the “Corpus files” section, you can add texts in .vrt format
+(results of the NLP service) to your new corpus. To do this, use the “Add Corpus File”
+button and fill in the form that appears. Here, you can also add metadata to each text.
+After all texts have been added, the corpus must be prepared for analysis. Start this
+process by clicking the “Build” button under “Actions.”
+The upper right corner of the corpus overview page shows the current status of the
+corpus; after the build process finishes, the status shown here should read “built.”
+Now, your corpus is ready for analysis.
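The .vrt (verticalized text) files referenced above follow the Corpus Workbench convention of one token per line, with tab-separated annotation columns and XML-like structural tags. As a rough illustration only (the column order and structural attributes here are assumptions; the files produced by the NLP service may differ), a fragment might look like:

```
<text id="doc1">
<s>
Dogs	dog	NOUN
bark	bark	VERB
.	.	PUNCT
</s>
</text>
```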
 Navigate to the corpus you would like to analyze and click on the Analyze button.
 This will take you to an analysis overview page for your corpus. Here, you can find a
diff --git a/app/templates/_base/_modals/_manual/03_dashboard.html.j2 b/app/templates/_base/_modals/_manual/03_dashboard.html.j2
index ab14ce51..0982ae59 100644
--- a/app/templates/_base/_modals/_manual/03_dashboard.html.j2
+++ b/app/templates/_base/_modals/_manual/03_dashboard.html.j2
@@ -16,7 +16,7 @@
 A job is an initiated file processing procedure. A model is a mathematical system
 for pattern recognition based on data examples that have been processed by AI.
 One can search for jobs as well as corpus listings using the search field displayed
 above them on the dashboard.
-Models can be found and edited by clicking on the corresponding service under My Contributions.
+Uploaded models can be found and edited by clicking on the corresponding service under My Contributions.
-Coming soon...
+The SpaCy NLP Pipeline extracts information from plain text files (.txt format)
+via computational linguistic data processing (tokenization, lemmatization,
+part-of-speech tagging, and named-entity recognition).
+To use this service, select the corresponding .txt file, the language model, and the
+version you want to use. When the job is finished, find and download the files in
+.json and .vrt format under “Results.”
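The relationship between the per-token annotations listed above and the verticalized .vrt output can be sketched in Python. This is a hedged illustration only: the four-column order (word, lemma, part of speech, named-entity tag) and the `to_vrt` helper are assumptions for the sketch, not the service's documented output layout.

```python
def to_vrt(tokens):
    """Render (word, lemma, pos, ner) tuples as a minimal VRT <text> block:
    one token per line, columns separated by tabs (column order is assumed)."""
    rows = "\n".join("\t".join(tok) for tok in tokens)
    return f"<text>\n{rows}\n</text>"

# Example annotations as an NLP pipeline might produce them:
tokens = [
    ("Berlin", "Berlin", "PROPN", "LOC"),
    ("is", "be", "AUX", "O"),
    ("big", "big", "ADJ", "O"),
    (".", ".", "PUNCT", "O"),
]
print(to_vrt(tokens))
```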
-    With the corpus analysis service, it is possible to create a text corpus
+    With the Corpus Analysis service, it is possible to create a text corpus
     and then explore it with analytical tools. The analysis session is
     realized on the server side by the Open Corpus Workbench software, which enables
     efficient and complex searches with the help of the CQP Query Language.
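The CQP Query Language mentioned above matches tokens by their annotated attributes. As a small illustration (the attribute names `word`, `lemma`, and `pos` are the usual Corpus Workbench defaults and are assumed here; the tag set available in a given corpus may differ):

```
[pos="ADJ"] [lemma="corpus"]
"language" []{0,2} "model"
```

The first query finds any adjective immediately followed by some form of the lemma "corpus"; the second finds the words "language" and "model" separated by at most two arbitrary tokens.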
+
+    To use this service, navigate to the corpus you would like to analyze and click on the Analyze button.
+    This will take you to an analysis overview page for your corpus. Here, you can find
+    a visualization of general linguistic information about your corpus, including tokens,
+    sentences, unique words, unique lemmas, unique parts of speech, and unique simple
+    parts of speech. You will also find a pie chart of the proportional textual makeup
+    of your corpus and can view the linguistic information for each individual text file.
+    A more detailed visualization of token frequencies, with a search option, is also on
+    this page.
+
+    From the corpus analysis overview page, you can navigate to the other analysis modules:
+    the Query Builder (under Concordance) and the Reader. With the Reader, you can read
+    your corpus texts tokenized with the associated linguistic information. The tokens
+    can be shown as lemmas, parts of speech, or words, and can be displayed in different
+    ways: visually as plain text with the option of highlighted entities, or as chips.
+
+    The Concordance module allows for more specific, query-oriented text analyses.
+    Here, you can filter by text parameters and structural attributes in different
+    combinations. This is explained in more detail in the Query Builder section of the
+    manual.