## Import the data into the database
1. Before importing the data we have to set up the tables in the PostgreSQL database.
    - Do this with `docker-compose run web python manage.py makemigrations`,
    - followed by `docker-compose run web python manage.py migrate`.
2. Now the data for the n-grams, speeches, and speakers has to be imported into the database of the app.
3. Shut down the app with the command `docker-compose down`.
4. Change the owner rights of all files in the repository. (This step should only be necessary on Linux systems.)
    - This has to be done because every process inside a Docker container runs with root privileges, so the volumes created by the containers are no longer accessible to your user.
    - Change the rights with `sudo chown -R $USER:$USER .`, executed from the root of the repository.
5. Download the folders *MdB\_data* and *outputs* from the link mentioned in [this repository](https://gitlab.ub.uni-bielefeld.de/sporada/bundesdata_markup_nlp_data).
    - Copy them into the folder *input_volume*, which is located at the root level of the web app repository.
    - If the downloaded folders are inside an archive, extract them first.
    - The folder *input_volume* is a volume that is mounted into the web app container, so the container can read all data inside it. Note that inside the container the volume is accessed via the path */usr/src/app/input_data*, not */usr/src/app/input_volume*.
6. Restart the app with `docker-compose up`.
7. First we have to import the speaker data.
    - This is done by executing the following command in the second terminal: `docker-compose run web python manage.py import_speakers /usr/src/app/input_data/MdB_data/MdB_Stammdaten.xml`.
8. After that we can import all the protocols and thus all speeches for every speaker.
    - The command to do that is `docker-compose run web python manage.py import_protocols /usr/src/app/input_data/outputs/markup/full_periods`. (Importing all protocols takes up to 2 days. For testing purposes *dev\_data/beautiful\_xml* or *test\_data/beautiful\_xml* can be used.)
9. Now the n-grams can be imported using `docker-compose run web python manage.py import_ngrams_bulk 1 /usr/src/app/input_data/outputs/nlp/full_periods/n-grams/lm_ns_year/1_grams lm_ns_year`.
    - This command imports the alphabetically split n-grams into their corresponding tables.
    - The first parameter of this command is *1*. It tells the function to import the n-grams from the input path as 1-grams.
    - Accordingly, the second parameter is the input path */usr/src/app/input_data/outputs/nlp/full_periods/n-grams/lm_ns_year/1_grams*, where the 1-grams are located. The last part of the input path clearly identifies the n-grams as 1-grams.
    - Finally, the third parameter identifies what kind of n-grams are being imported. In this case it is set to *lm_ns_year*, which means the n-grams are based on lemmatized text with stop words removed, counted per year.
    - An example command to import 2-grams looks like this: `docker-compose run web python manage.py import_ngrams_bulk 2 /usr/src/app/input_data/outputs/nlp/full_periods/n-grams/lm_ns_year/2_grams lm_ns_year`.
    - To import 3-grams from a different corpus, the command would look like this: `docker-compose run web python manage.py import_ngrams_bulk 3 /usr/src/app/input_data/outputs/nlp/full_periods/n-grams/tk_ws_speaker_\(1-3\)/3_grams tk_ws_speaker`.
    - Be careful when importing the n-grams. **If the parameters are set wrong, the n-grams will be imported into the wrong tables, leading to incorrect findings in the Ngram Viewer.**
    - If you did something wrong, you can reset the database with `docker-compose run web python manage.py flush` and start the data import again.
    - It is possible to import different n-gram sets at the same time by running multiple commands in multiple terminals. Just keep an eye on the CPU and RAM usage.
    - There is also an optional fourth parameter, `-bs`, to set the batch size of one insert. By default one million rows are read from the CSV and inserted into the database at once; `-bs 10000000` would raise this to ten million. Increasing the value also increases RAM usage, so be careful with it. (A sketch of how such a chunked import works is shown after this list.)
10. Repeat the step above for every kind of n-gram data you want to import. Importing 1-grams only takes a few minutes, while importing 5-grams takes several hours. (For testing purposes the n-grams from *dev\_data* can be used.)
11. After importing the n-grams, the web app is all set up.
12. The app can be shut down with `docker-compose down`. All imported data is saved persistently in the database volume *postgres_data*.
13. To restart the app, use `docker-compose up`, or `docker-compose up -d` to start it detached.
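For orientation, here is a minimal sketch of what a chunked bulk import in the style of `import_ngrams_bulk` does, and why the batch size trades speed against RAM. The model name `Ngram`, its fields, and the CSV layout (`ngram,year,count`) are assumptions made purely for illustration; the authoritative implementation is the management command shipped in this repository.

```python
# Minimal sketch of a chunked bulk import, in the style of import_ngrams_bulk.
# Hypothetical assumptions for illustration: a Django model `Ngram` with the
# fields `ngram`, `year`, `count`, and CSV rows laid out as "ngram,year,count".
import csv

from django.db import transaction
from ngram_viewer.models import Ngram  # hypothetical app and model name


def import_ngrams(csv_path, batch_size=1_000_000):
    """Read the CSV in batches and insert each batch with a single bulk query."""
    batch = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            batch.append(Ngram(ngram=row[0], year=int(row[1]), count=int(row[2])))
            if len(batch) >= batch_size:
                with transaction.atomic():
                    Ngram.objects.bulk_create(batch)
                batch = []  # release the batch before reading further rows
    if batch:  # insert whatever is left over
        with transaction.atomic():
            Ngram.objects.bulk_create(batch)
```

Every row of a batch is held in memory until the batch is flushed, which is why a larger `-bs` value speeds the import up but also raises RAM usage.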
# Security settings for hosting your own public version
Before hosting your own version of this website publicly, do not forget to change the PostgreSQL username, password, etc. in *docker-compose.yml* and *app/bundesdata_app/settings.py*. Also change the secret key mentioned in *app/bundesdata_app/settings.py* to a new Django key that you will keep secret! Keep in mind as well that the current version is not HTTPS ready.
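One common pattern for keeping such values out of version control, sketched below as a suggestion rather than as the app's current code, is to read them from environment variables in *app/bundesdata_app/settings.py*, so the secrets live only in *docker-compose.yml* or an untracked `.env` file. The variable names here are assumptions for illustration.

```python
# Sketch for app/bundesdata_app/settings.py: read secrets from the environment
# instead of hard-coding them. The variable names are suggestions, not the
# app's existing configuration.
import os

SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # fail loudly if unset

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "bundesdata"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ["POSTGRES_PASSWORD"],  # deliberately no default
        "HOST": os.environ.get("POSTGRES_HOST", "db"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```

The matching values would then be defined once in the `environment:` section of *docker-compose.yml* and changed per deployment without touching the code.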