
Failing to reproduce the accuracy of the pretrained VGG model #27

@xhzhao


I downloaded the pretrained model from: https://filebox.ece.vt.edu/~jiasenlu/codeRelease/vqaRelease/train_val/pretrained_lstm_train-val_test

and the corresponding features here:
https://filebox.ece.vt.edu/~jiasenlu/codeRelease/vqaRelease/train_val/data_train-val_test.zip

eval.lua runs without error. But after I put the result files into the VQA evaluation repo (https://github.com/VT-vision-lab/VQA) and ran vqaEvalDemo.py, I got the following error:

```
loading VQA annotations and questions into memory...
0:00:07.128280
creating index...
index created!
Loading and preparing results...
Traceback (most recent call last):
  File "vqaEvalDemo.py", line 31, in <module>
    vqaRes = vqa.loadRes(resFile, quesFile)
  File "../../VQA/PythonHelperTools/vqaTools/vqa.py", line 165, in loadRes
    'Results do not correspond to current VQA set. Either the results do not have predictions for all question ids in annotation file or there is atleast one question id that does not belong to the question ids in the annotation file.'
AssertionError: Results do not correspond to current VQA set. Either the results do not have predictions for all question ids in annotation file or there is atleast one question id that does not belong to the question ids in the annotation file.
```
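The assertion fires inside `loadRes` when the set of `question_id`s in the results JSON is not exactly the set in the question file, which commonly happens when the results were generated for a different split than the question file being evaluated against. One way to see which side is off is to diff the two id sets. A minimal sketch (the `diff_question_ids` helper and the toy ids are hypothetical, not part of the VQA tools; the file paths in the comment would be your own):

```python
import json  # used if you load the ids from the actual result/question files

def diff_question_ids(res_ids, ann_ids):
    """Return (missing, extra): ids the question file expects but the
    results lack, and ids in the results the question file doesn't have."""
    res_ids, ann_ids = set(res_ids), set(ann_ids)
    return sorted(ann_ids - res_ids), sorted(res_ids - ann_ids)

# In practice the ids would come from the two JSON files, e.g.:
#   res_ids = [r['question_id'] for r in json.load(open(resFile))]
#   ann_ids = [q['question_id'] for q in json.load(open(quesFile))['questions']]
# Toy ids here just to illustrate the check.
missing, extra = diff_question_ids(res_ids=[1, 2, 4], ann_ids=[1, 2, 3])
print(missing)  # ids expected by the question file but absent from results -> [3]
print(extra)    # ids in the results not present in the question file -> [4]
```

If `missing` and `extra` are both non-empty and large, the two files are almost certainly from different splits; if only `missing` is non-empty, eval.lua likely did not produce predictions for every question.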
