This repository hosts the code used to benchmark forward gradients against canonical test functions, associated with this research paper. It was developed during my research work at Lokad, as part of my end-of-study internship for the MVA master's degree.
We recommend using a virtual environment. To install the package, run
$ cd [path-to-forward-repository]
$ pip install -e .
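For reference, a typical setup might look like the following. This is only a sketch: it assumes `python3` with the built-in `venv` module is available, and uses `.venv` as an arbitrary environment name.

```shell
# Create and activate a virtual environment (the name .venv is a convention, not required).
python3 -m venv .venv
. .venv/bin/activate
# Install the benchmark package in editable mode.
pip install -e .
```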
The data and performance profiles for the experiment can then be generated with
$ python3 benchmark/make.py performance
$ python3 benchmark/figures.py performance
You can also generate accuracy profiles, not included in the paper, with
$ python3 benchmark/make.py accuracy
$ python3 benchmark/figures.py accuracy
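For intuition about what is being benchmarked, here is a minimal, self-contained sketch of the forward-gradient estimator on a canonical test function (the sphere function). It is not the repository's implementation: the repo relies on the autograd library, whereas this sketch uses a hand-rolled dual-number class so it runs with no dependencies. The forward gradient of f at x is (∇f(x)·v) v with v drawn from a standard Gaussian, an unbiased estimator of ∇f(x) obtained from a single forward-mode pass.

```python
import random

class Dual:
    """Minimal dual number for forward-mode AD: carries a value and a tangent."""
    def __init__(self, value, tangent=0.0):
        self.value = value
        self.tangent = tangent
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.tangent + other.tangent)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule for the tangent part.
        return Dual(self.value * other.value,
                    self.value * other.tangent + self.tangent * other.value)
    __rmul__ = __mul__

def sphere(xs):
    # Sphere test function f(x) = sum_i x_i^2, a canonical benchmark function.
    return sum(x * x for x in xs)

def forward_gradient(f, x):
    """One forward-gradient sample: (grad f(x) . v) * v, with v ~ N(0, I)."""
    v = [random.gauss(0.0, 1.0) for _ in x]
    # A single JVP evaluation yields the directional derivative grad f(x) . v.
    out = f([Dual(xi, vi) for xi, vi in zip(x, v)])
    return [out.tangent * vi for vi in v]

random.seed(0)
x = [1.0, -2.0]
# Averaging many samples recovers the true gradient [2, -4], since the
# estimator is unbiased: E[(grad f . v) v] = grad f when E[v v^T] = I.
n = 20000
acc = [0.0, 0.0]
for _ in range(n):
    acc = [a + g for a, g in zip(acc, forward_gradient(sphere, x))]
est = [a / n for a in acc]
print(est)  # close to [2.0, -4.0]
```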
This implementation relies on the following works, whose authors I would like to credit and thank:
- the autograd library, for all things automatic differentiation
- the exhaustive collection of test functions implemented by Axel Thevenot