Added some documentation.
This commit is contained in:
parent
9ef4c51c9d
commit
714f61315b
135 README.md
@@ -1,74 +1,91 @@
-# Input and Output data
+# What is this?

+This software is used to automatically mark up the official protocols of the Bundestag.
+The Bundestag has published the protocols of every session from 1949 to 2017 in XML.
+Unfortunately, the markup of those files is very rudimentary. It is not possible to see
+which member of parliament gave which speech, and so on.

+This software can mark up every protocol from 1949 to 2017 automatically. The
+software identifies speakers, their speeches, metadata, etc. For detailed information
+on why this software was made and how it works, read the corresponding master thesis
+uploaded [here](#) (it is written in German, though).

+Besides the markup, the software can also calculate n-grams for all automatically
+marked protocols, either from lemmatized or just tokenized text, with or without
+stop words.
+## Web app based on the protocols and n-grams

+The protocols and n-grams are used for different functions of a Django web application.
+The web application displays the protocols, speeches and corresponding speakers
+for research purposes.

+The web app also provides an Ngram Viewer based on the produced n-gram data that
+displays n-gram frequencies for all protocols from 1949 to 2017. The Ngram Viewer
+is similar to the [Google Ngram Viewer](https://books.google.com/ngrams).

+The source code of the web application can be found here: https://gitlab.ub.uni-bielefeld.de/sporada/bundesdata_web_app
+A live version of the app is accessible from inside the Bielefeld University
+network by visiting http://129.70.12.88:8000/.

+## Input and Output data

 The input and output data of this software can be found here: https://gitlab.ub.uni-bielefeld.de/sporada/bundesdata_markup_nlp_data
-# Master_thesis
-Master Thesis Repository.
+You can find all automatically marked protocols and n-grams there. The
+official protocols used as input data are also included.

-## Required packages and languages
-- Python 3.7+
-- Python packages are installed via requirements.txt; see installation step 2.
+# Installation and usage

+## Requirements
+- Python 3.7.1+
+- the python3.7-dev package
+- js-beautify (optional if the corresponding step is skipped)
+- virtualenv
+- a Unix-like OS
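
A quick way to sanity-check these requirements before installing (a sketch; it assumes a Debian-like system where `dpkg` exists and that the tools are already on your PATH):

```bash
python3.7 --version                  # should report 3.7.1 or newer
dpkg -s python3.7-dev | grep Status  # Debian/Ubuntu: is the dev package installed?
virtualenv --version
js-beautify --version                # optional, only needed for beautified XML
```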

 ## Installation

-1. Make sure the package `python3.7-dev` is installed. If not: `sudo apt-get install python3.7-dev`
-2. Install _virtualenv_ with `pip install virtualenv`, or with the package manager of your distribution.
-3. Install JS Beautifier system-wide with `sudo npm -g install js-beautify` (optional: if not wanted, this step can be skipped; the markup step that needs this package can then be skipped as well, but there will be no nicely formatted XML files).
-4. Create a virtual environment for the project with `virtualenv --python=python3.7 path/to/folder`
-5. Activate the virtual environment with `source path/to/folder/bin/activate`
-6. `cd path/to/repository`
-7. Install the dependencies with `pip install -r requirements.txt`.

-## Example script invocations:

+0. Install the requirements mentioned above. Install _js-beautify_ following one of the steps mentioned here: https://github.com/beautify-web/js-beautify#installation. Installing and using _js-beautify_ is optional; how to skip it is mentioned in the section below.
+1. Clone this repository with `git clone https://gitlab.ub.uni-bielefeld.de/sporada/bundesdata_markup_nlp_software.git`.
+2. Create a virtual environment for the software with `virtualenv --python=python3.7 path/to/folder/of/your/choice`.
+3. Activate the virtual environment with `source path/to/folder/bin/activate`.
+4. Navigate into the cloned repository with `cd path/to/repository`.
+5. Install all requirements mentioned in _requirements.txt_ with `pip install -r requirements.txt`.
+6. Move down into _bundesdata\_markup\_nlp_ with `cd bundesdata_markup_nlp`.
+7. Execute `./bundesdata_markup.py -h` or `python bundesdata_markup.py -h` to verify the successful installation.
+8. If the help shows up, you are ready to go (the whole sequence is sketched below).
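
A condensed sketch of the installation sequence above, assuming a Debian-like system; the virtual-environment path is a placeholder, and the clone directory is assumed to be named after the repository:

```bash
sudo apt-get install python3.7-dev   # requirement from the list above
sudo npm -g install js-beautify      # optional, only for beautified XML
pip install virtualenv

git clone https://gitlab.ub.uni-bielefeld.de/sporada/bundesdata_markup_nlp_software.git
virtualenv --python=python3.7 ~/venvs/bundesdata
source ~/venvs/bundesdata/bin/activate
cd bundesdata_markup_nlp_software
pip install -r requirements.txt
cd bundesdata_markup_nlp
./bundesdata_markup.py -h            # help text indicates a working install
```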

-### @Home
-- `source ~/VirtualEnvs/bundesdata/bin/activate`
-- `cd ~/Documents/Eigene\ geschriebene\ Programme/master_thesis/bundesdata/`
+## Usage

-#### Development Data
-**Metadata**
-- `python markup/metastructure.py -p /home/stephan/Documents/Eigene\ geschriebene\ Programme/master_thesis/data/working_data/development_data_xml -f *.xml -o /home/stephan/Documents/Eigene\ geschriebene\ Programme/master_thesis/data/working_data`
+### Markup process

+1. Download some protocols to use as input for the markup process. You can either download files from https://gitlab.ub.uni-bielefeld.de/sporada/bundesdata_markup_nlp_data, including the _development\_data\_xml_ set found in _inputs_, or download the protocols directly from https://www.bundestag.de/services/opendata. Only protocols from legislative periods 1 to 18 can be used as input.
+2. Place the protocols you want to mark in one directory. The directory can contain one level of subdirectories, for example for protocols of different periods. This tutorial will continue using the folder _development\_data\_xml_.
+3. Now you can start the markup process by executing the following command: `./bundesdata_markup.py -sp /path/to/development_data_xml /path/to/some/folder/for/the/output` (see the example run after this list).
+4. After completion, the marked protocols can be found in the folder _beautiful\_xml_ inside the specified output folder.
+5. To skip the step that uses _js-beautify_, execute the command `./bundesdata_markup.py -sp /path/to/development_data_xml /path/to/some/folder/for/the/output -kt -sb`.
+6. The non-beautified protocols can be found in _clear\_speech\_markup_. Notice that all other temporary folders containing the intermediate protocols are also kept in the output folder; this is due to using the `-kt` parameter.
+7. The marked protocols can now be used as input to calculate different n-grams.
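
For instance, a complete run over the development set might look like this (both paths are placeholders):

```bash
./bundesdata_markup.py -sp ~/data/development_data_xml ~/data/markup_output
ls ~/data/markup_output/beautiful_xml   # final, beautified protocols (step 4)
```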

-**Speakers**
-- `python markup/speakers.py -p /home/stephan/Documents/Eigene\ geschriebene\ Programme/master_thesis/data/working_data/xml_new_metadata_structure -f *.xml -o /home/stephan/Documents/Eigene\ geschriebene\ Programme/master_thesis/data/working_data`
+### N-grams

+1. Before calculating the n-grams, the protocols have to be either lemmatized or tokenized.

-#### Full data
-**Metadata**
-- `python markup/metastructure.py -p /home/stephan/Documents/Eigene\ geschriebene\ Programme/master_thesis/data/protocols_raw_xml -f *.xml -o /home/stephan/Documents/Eigene\ geschriebene\ Programme/master_thesis/data`
+#### Lemmatize

+2. To lemmatize the protocols, execute `./bundesdata_nlp.py -lm -ns -sp /path/to/output/beautiful_xml /path/to/some/folder/for/the/output`, or `./bundesdata_nlp.py -lm -ns -sp /path/to/clear_speech_markup /path/to/some/folder/for/the/output` if you want to use non-beautified files. Notice that the parameter `-ns` removes stop words from the lemmatized text; to include stop words, remove the parameter (see the sketch after this list).
+3. The lemmatized protocols can be found in _nlp\_output/nlp\_beuatiful\_xml_. These protocols are also beautified using _js-beautify_.
+4. If you want to skip the beautification, add the parameter `-sb`. Non-beautified protocols are found in _nlp\_output/lemmatized_.
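
A concrete lemmatization call following the steps above (input taken from the markup output; the output folder is a placeholder):

```bash
# Lemmatize without stop words (-ns); append -sb to skip the js-beautify step.
./bundesdata_nlp.py -lm -ns -sp ~/data/markup_output/beautiful_xml ~/data/nlp_run
```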

-**Speakers**
-- `python markup/speakers.py -p /home/stephan/Documents/Eigene\ geschriebene\ Programme/master_thesis/data/xml_new_metadata_structure -f *.xml -o /home/stephan/Documents/Eigene\ geschriebene\ Programme/master_thesis/data`
-### @Uni
+#### Tokenize

+1. To tokenize the protocols, execute `./bundesdata_nlp.py -tn -ns -sp /path/to/output/beautiful_xml /path/to/some/folder/for/the/output`, or `./bundesdata_nlp.py -tn -ns -sp /path/to/clear_speech_markup /path/to/some/folder/for/the/output` if you want to use non-beautified files. Notice that the parameter `-ns` removes stop words from the tokenized text; to include stop words, remove the parameter (example below).
+2. The tokenized protocols can be found in _nlp\_output/nlp\_beuatiful\_xml_. These protocols are also beautified using _js-beautify_.
+3. If you want to skip the beautification, add the parameter `-sb`. Non-beautified protocols are found in _nlp\_output/lemmatized_.
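
And the tokenized counterpart, this time keeping stop words by omitting `-ns` (paths are again placeholders):

```bash
# Tokenize with stop words kept; -sb skips the beautification step.
./bundesdata_nlp.py -tn -sb -sp ~/data/markup_output/beautiful_xml ~/data/nlp_run
```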

+#### Calculating the n-grams

-#### Development Data
-- `source /home/stephan/VirtualEnvs/bundesdata/bin/activate`
-- `cd /home/stephan/Repos/master_thesis/bundesdata`
+1. Now the lemmatized or tokenized protocols (either with or without stop words) can be used as input for the n-gram calculation. The following steps will be explained using the beautified protocols from _nlp\_beuatiful\_xml_.
+2. To calculate the n-grams for the lemmatized protocols without stop words per year, use the command `./bundesdata_nlp.py -cn year lm_ns_year -sp /path/to/nlp_output/nlp_beuatiful_xml/ /path/to/some/folder/for/the/output/`.
+3. After that, move a copy of _bundesdata\_markup\_nlp/utility/move\_ngrams.py_ into the folder _nlp\_output/n-grams_ and execute it with `python move_ngrams.py`.
+4. The n-grams are now ready to be imported into the database of the Django web app. (The source code for the app and a tutorial for importing the n-grams can be found here: https://gitlab.ub.uni-bielefeld.de/sporada/bundesdata_web_app)

-**Speakers**
-- `python markup/speakers.py -p /home/stephan/Repos/master_thesis/data/working_data/xml_new_metadata_structure -f *.xml -o /home/stephan/Repos/master_thesis/data/working_data`
+5. If you want to calculate n-grams from tokenized protocols without stop words per year, use this command: `./bundesdata_nlp.py -cn year tk_ns_year -sp /path/to/nlp_output/nlp_beuatiful_xml/ /path/to/some/folder/for/the/output/`.
+6. If you want to calculate n-grams from tokenized protocols with stop words per speaker, use this command: `./bundesdata_nlp.py -cn speaker tk_ws_speaker -sp /path/to/nlp_output/nlp_beuatiful_xml/ /path/to/some/folder/for/the/output/`.
+7. The parameter `-cn` is always followed by two arguments (example: `-cn year lm_ns_year`). The first specifies how the n-grams are counted; it can be set to "year", "mont_year", "speaker" or "speech", and the n-grams will then be counted by year, speaker and so on. The second argument is a user-specified string to identify from what kind of protocols the n-grams have been calculated. The string "lm_ns_year", for example, describes that the input protocols have been lemmatized (lm) and contain no stop words (ns); the last part (year) specifies that the n-grams have been calculated by year (the convention is summarized below).
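
The label itself is free-form, but the examples in this README compose it from three segments; a sketch of that convention (the `lm_ns_speech` label below is hypothetical, built the same way):

```bash
# -cn <count_by> <label>
#   count_by: year | mont_year | speaker | speech
#   label:    lm|tk (lemmatized/tokenized) + ns|ws (no/with stop words) + count_by
./bundesdata_nlp.py -cn speech lm_ns_speech -sp /path/to/nlp_output/nlp_beuatiful_xml/ /path/to/some/folder/for/the/output/
```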

-**Metadata**
-- `python markup/metastructure.py -p /home/stephan/Repos/master_thesis/data/working_data/development_data_xml -f *.xml -o /home/stephan/Repos/master_thesis/data/working_data`

-#### Test Data
-- `source /home/stephan/VirtualEnvs/bundesdata/bin/activate`
-- `cd /home/stephan/Repos/master_thesis/bundesdata`
-**Speakers**
-- `python markup/speakers.py -p /home/stephan/Repos/master_thesis/data/working_data/test/xml_new_metadata_structure -f *.xml -o /home/stephan/Repos/master_thesis/data/working_data/test`
-**Metadata**
-- `python markup/metastructure.py -p /home/stephan/Repos/master_thesis/data/working_data/test_data_xml -f *.xml -o /home/stephan/Repos/master_thesis/data/working_data/test`

-#### Full data
-- `source /home/stephan/VirtualEnvs/bundesdata/bin/activate`
-- `cd /home/stephan/Repos/master_thesis/bundesdata`
-**Speakers**
-- `python markup/speakers.py -p /home/stephan/Repos/master_thesis/data/xml_new_metadata_structure -f *.xml -o /home/stephan/Repos/master_thesis/data`
-**Metadata**
-- `python markup/metastructure.py -p /home/stephan/Repos/master_thesis/data/protocols_raw_xml -f *.xml -o /home/stephan/Repos/master_thesis/data`
@@ -85,7 +85,7 @@ def parse_arguments():
     parser.add_argument("-fr",
                         "--fresh_run",
                         help="Deletes all temporary folders in the output folder \
-                        also deletes all paths saved in the config file file \
+                        also deletes all paths saved in the config file \
                         before starting the markup process. The user has to set \
                         the paths again with -sp.",
                         action="store_true",
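
Given the `--fresh_run` flag above, a clean re-run could look like this (a sketch; the paths are placeholders, and `-sp` must be passed again because the saved paths are deleted):

```bash
./bundesdata_markup.py -fr -sp ~/data/development_data_xml ~/data/markup_output
```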
@@ -17,6 +17,7 @@ This script handles the tokenization, lemmatization and n-gram calculation of
 the input protocols. Needs some user input specified in parse_arguments().
 """


 def parse_arguments():
     """
     Argument Parser
@@ -31,8 +31,16 @@ date_string = [\d\t ]*Deutscher Bundestag (?:–|—|-|--) \d{1,2} ?\. Wahlperio
 multiline_comment = \B\([^\(\)]* ; [^\(\)]*\)\B ; kommentar

 [File paths]
-output_folder = /home/stephan/Desktop/output
+output_folder = /home/stephan/Downloads/output
-input_folder_xmls = /home/stephan/Desktop/18_Wahlperiode_2013-2017/
+input_folder_xmls = /home/stephan/Downloads/development_data_xml
-new_metadata = /home/stephan/Desktop/output/new_metadata
+new_metadata = /home/stephan/Downloads/output/new_metadata
-new_simple_markup = /home/stephan/Desktop/output/simple_xml
+new_simple_markup = /home/stephan/Downloads/output/simple_xml
+complex_markup = /home/stephan/Downloads/output/complex_markup
+clear_speech_markup = /home/stephan/Downloads/output/clear_speech_markup
+tmp_path = /home/stephan/Downloads/nlp_output/lemmatized/tmp
+beautiful_xml = /home/stephan/Downloads/output/beautiful_xml
+nlp_output = /home/stephan/Downloads/nlp_output
+nlp_input = /home/stephan/Downloads/nlp_output/nlp_beuatiful_xml/
+nlp_lemmatized_tokenized = /home/stephan/Downloads/nlp_output/lemmatized
+nlp_beuatiful_xml = /home/stephan/Downloads/nlp_output/nlp_beuatiful_xml
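
This section is plain INI syntax, so it can be read with Python's standard-library configparser; a minimal sketch (the file name `config.ini` is an assumption for this example):

```python
# Minimal sketch: reading the [File paths] section shown above.
# The file name "config.ini" is an assumption for this example.
from configparser import ConfigParser

config = ConfigParser(interpolation=None)  # values are literal paths/regexes
config.read("config.ini")

paths = config["File paths"]
print(paths["output_folder"])   # -> /home/stephan/Downloads/output
print(paths["nlp_input"])       # -> /home/stephan/Downloads/nlp_output/nlp_beuatiful_xml/
```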
17375 bundesdata_markup_nlp/logs/bundesdata.log (Normal file → Executable file)
File diff suppressed because it is too large.

1 bundesdata_markup_nlp/logs/bundesdata_nlp.log (Normal file → Executable file)
@@ -0,0 +1 @@
+2019/03/03 18:31:14 __main__ INFO:Start time of script is: 2019-03-03 18:31:14.664969