Finds the "magic", i.e. the happiest moments, of a given movie and makes a trailer out of them.
- A CPU which supports the AVX instruction set. Starting with TensorFlow 1.6, binaries use AVX instructions, which may not run on older CPUs. The processors with AVX instructions are listed in this Wikipedia link.
- A working Python 3.6 environment along with pip installed.
- External modules installed. Instructions below:
-
To install the external modules, open CMD/Terminal and navigate inside the MagikMoments directory. Then run:

On Windows:
pip install -r requirements.txt

On Linux:
pip3 install -r requirements.txt
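Since TensorFlow binaries from 1.6 onward require AVX (see the prerequisites above), it can help to confirm support before installing. Below is a minimal, Linux-only sketch that parses the CPU flags in /proc/cpuinfo; the helper name `cpu_has_avx` is ours, not part of MagikMoments:

```python
# Hypothetical helper to check the AVX prerequisite on Linux.
def cpu_has_avx(cpuinfo_text):
    """Return True if any 'flags' line in /proc/cpuinfo lists avx."""
    # Each logical core reports its features on a line starting with "flags"
    return any(line.startswith("flags") and "avx" in line.split()
               for line in cpuinfo_text.splitlines())

# Usage on Linux:
# with open("/proc/cpuinfo") as f:
#     print(cpu_has_avx(f.read()))
```

If this reports False, the TensorFlow binary wheel will typically crash at import time, and an older TensorFlow release or a from-source build is needed instead.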
The weights learned when the classifier was run can be found here. The entire model (architecture, layers, and weights) can be found here.
To execute it, open CMD/Terminal, navigate inside the MagikMoments/src/ directory and run:

On Windows:
python classifier.py

On Linux:
python3 classifier.py
Alternatively, navigate to this Google Colaboratory Link, copy the Jupyter notebook to your Drive and run it.
Pre-requisites:
- An image with visible human(s).
- A sample video to test.
- Specify the video file to be used.
- The video file will be sent to the create_frames method, a generator function that yields frames.
- Each frame from create_frames will be passed to the detect_face method, which checks for faces and displays any that are found.
- The faces from detect_face will be passed to the check_emotion method, which checks whether the face is happy and stores the timestamp.
- The cut_moments method will be called, which cuts a 5-second clip for each moment (2.5 s before, 2.5 s after) and stores the clip objects in an array.
- That array will be passed to combine_clips, which combines all those clips into a final video.
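The steps above can be sketched as plain Python. This is a structural outline only, assuming the method names from the list; face/emotion detection and video I/O are stubbed with dictionaries so the sketch runs without OpenCV, a trained model, or a video library:

```python
# Sketch of the MagikMoments pipeline. Names mirror the methods described
# above; detection and video handling are stubbed for illustration.

def create_frames(video):
    """Generator yielding (timestamp, frame) pairs from a decoded stream."""
    for t, frame in video:  # `video` stands in for a real frame source
        yield t, frame

def detect_face(frame):
    """Stub: return the frame if it contains a face, else None."""
    return frame if frame.get("face") else None

def check_emotion(face):
    """Stub: True when the detected face is classified as happy."""
    return face.get("happy", False)

def cut_moments(timestamps, clip_len=5.0):
    """Turn each happy timestamp into a (start, end) clip window:
    half the clip length before the moment, half after, clamped at 0."""
    half = clip_len / 2
    return [(max(0.0, t - half), t + half) for t in timestamps]

def combine_clips(clips):
    """Stub: returning the window list stands in for joining video clips."""
    return clips

def make_trailer(video):
    happy_times = []
    for t, frame in create_frames(video):
        face = detect_face(frame)
        if face is not None and check_emotion(face):
            happy_times.append(t)
    return combine_clips(cut_moments(happy_times))

# Example: three frames with faces, two of them happy
video = [(1.0, {"face": True, "happy": True}),
         (4.0, {"face": True, "happy": False}),
         (9.0, {"face": True, "happy": True})]
print(make_trailer(video))  # [(0.0, 3.5), (6.5, 11.5)]
```

Note how the first window is clamped to 0.0 because the happy moment at t = 1.0 falls less than 2.5 s into the video; a real implementation would likewise need to clamp the end of the last window to the video's duration.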
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.