Hello all! This is a little example of using :hugs: [huggingface transformers](https://github.com/huggingface/transformers) and [Flask-RESTful](https://flask-restful.readthedocs.io/en/latest/index.html) to create a question answering API.

### Install

The only real requirement is a Linux environment. If you are using Windows I recommend [installing Ubuntu for Windows](https://ubuntu.com/tutorials/ubuntu-on-windows). To install the needed software dependencies run:
```bash
cd /path/to/question_answering_api
bash install_dependencies.sh
```
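The contents of `install_dependencies.sh` aren't shown in this README. As a rough idea, a script covering the usual manual steps (installing pip if it's missing, then the Python requirements) might look like the sketch below; the real script in the repo may differ:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of install_dependencies.sh; the actual script may differ.
set -eu

# Install pip if it is missing.
if ! python3 -m pip --version >/dev/null 2>&1; then
    curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    python3 get-pip.py
fi

# Install the Python requirements for the API, if the file is present.
if [ -f requirements.txt ]; then
    python3 -m pip install -r requirements.txt
fi
```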

### Usage
1. #### Start the API server
Context:
largest and most biodiverse tract of tropical rainforest
in the world, with an estimated 390 billion individual
trees divided into 16,000 species.
Question:
Which name is also used to describe the Amazon rainforest in English?
Answer:
Amazonia.
```

### Docker

To run the API inside a container, take the following steps:
1. #### Install Docker
   Follow the instructions [here](https://docs.docker.com/engine/install/) to install Docker on your system.
2. #### Download the model and tokenizer
   We don't want to bake large machine learning models into the container image if we don't have to, so we fetch the model and tokenizer from huggingface.co and mount them into the container as a volume. Open a terminal and run:
   ```bash
   cd /path/to/question_answering_api
   bash fetch_model.sh
   ```
   This will pull the model and save it to a directory we can mount as a volume for our container.
3. #### Build the container
   In the same or a new terminal, run:
   ```bash
   cd /path/to/question_answering_api # Optional if you're in the repo root already.
   # Build the image from the current directory and tag it qa-api:v1.
   docker build -t qa-api:v1 .
   ```
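   The build step assumes a Dockerfile in the repo root. It isn't shown in this README, but for orientation it might look roughly like the following sketch (the base image and entry point are guesses; check the repo's actual Dockerfile):
   ```dockerfile
   FROM python:3.9-slim
   WORKDIR /app
   COPY requirements.txt .
   RUN pip install --no-cache-dir -r requirements.txt
   COPY . .
   # The model directory is mounted at /app/models at run time (see the docker run step).
   EXPOSE 5000
   # Hypothetical entry point; replace with the repo's actual server script.
   CMD ["python3", "app.py"]
   ```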
4. #### Start the container
   In the same terminal, type in:
   ```bash
   # Map port 5000 in the container to localhost:5000 and mount the
   # model directory as a volume (use an absolute path for the mount!).
   docker run \
       -p 5000:5000 \
       -v /path/to/question_answering_api/models:/app/models \
       qa-api:v1
   ```
5. #### Run the client
   In a new terminal window (just like before, we need two open), run the following:
   ```bash
   # Make sure you're in the repo root!
   python3 question_answering_api.py
   ```
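If you'd rather script the request yourself, the client step boils down to POSTing the question and context as JSON to the server. A minimal sketch using `requests`; the `/qa` path and the field names are hypothetical, so check `question_answering_api.py` for the real ones:

```python
import requests

def ask(question, context, url="http://localhost:5000/qa"):
    """Post a question/context pair to the QA server and return the answer.

    The /qa path and the question/context/answer field names are
    assumptions for illustration, not necessarily the repo's actual API.
    """
    resp = requests.post(url, json={"question": question, "context": context}, timeout=10)
    resp.raise_for_status()
    return resp.json()["answer"]
```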
And that's it! If you want to host your container in the cloud, it's now as easy as running `docker push`.