Run locally
===========

  1. Install Python 3.8+, then create and activate a virtual environment.

    $ python -m venv my_venv
    $ source my_venv/bin/activate
    

    The steps below assume your virtual environment is activated. To deactivate it, run ``deactivate`` at any time. See the full Virtual Environment docs for details.
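Before installing dependencies, it can be worth confirming that the active interpreter meets the 3.8+ requirement and that the venv is actually active. A small sanity-check sketch (the helper name is illustrative, not part of this repository):

```python
import sys

def check_environment():
    """Report whether the interpreter is 3.8+ and a venv is active."""
    meets_version = sys.version_info >= (3, 8)
    # Inside a virtual environment, sys.prefix points at the venv
    # while sys.base_prefix points at the base interpreter.
    in_venv = sys.prefix != sys.base_prefix
    return meets_version, in_venv

meets_version, in_venv = check_environment()
print(f"Python >= 3.8: {meets_version}, venv active: {in_venv}")
```

If either value prints ``False``, fix the environment before running ``pip install`` below.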

  2. Install Python dependencies

    $ pip install -r requirements.txt
    
  3. Install Node.js v14.16.0+.

  4. Install Node.js dependencies

    $ npm install
    
  5. Install .NET 7 SDK

    See Installing on Linux for more details and troubleshooting.

    $ wget https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
    $ sudo dpkg -i packages-microsoft-prod.deb
    $ rm packages-microsoft-prod.deb

    $ sudo apt-get update && \
      sudo apt-get install -y dotnet-sdk-7.0
    
  6. Install the Go toolchain, which is needed to compile the Go runner files in a later step.

  7. Install Docker and docker-compose

    Note: On macOS, Docker containers are run inside a virtual machine. This incurs significant overhead and can skew results unpredictably.

  8. Install Synth

  9. Configure simulated latency. The instructions for this vary by operating system. On Linux, this can be achieved with tc:

    $ sudo tc qdisc add dev br-webapp-bench root netem delay 1ms
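To sanity-check that the simulated delay is in effect, you can time TCP connection setup through the affected interface; with netem active, connects should take at least the configured delay longer than a bare loopback connect. A minimal measurement sketch (the helper and the throwaway listener are illustrative, not part of the benchmark suite):

```python
import socket
import time

def measure_connect_ms(host, port, samples=3):
    """Average TCP connect time in milliseconds over several attempts."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            total += time.perf_counter() - start
    return total / samples * 1000.0

# Demonstration against a throwaway local listener; point host/port at a
# service behind br-webapp-bench to observe the netem delay.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(8)
port = listener.getsockname()[1]
print(f"avg connect: {measure_connect_ms('127.0.0.1', port):.3f} ms")
listener.close()
```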
    
  10. Generate the dataset.

    $ make new-dataset
    
  11. Load the data into the test databases via $ make load. Alternatively, you can run only the loaders you care about:

    $ make load-django
    $ make load-edgedb
    $ make load-hasura
    $ make load-mongodb
    $ make load-postgres
    $ make load-prisma
    $ make load-sequelize
    $ make load-sqlalchemy
    $ make load-typeorm
    
  12. Compile runner files (Go, TypeScript):

    $ make compile

  13. Run the JavaScript benchmarks

    First, run the following loaders:

    $ make load-typeorm
    $ make load-sequelize
    $ make load-postgres
    $ make load-prisma
    $ make load-edgedb
    

    Then run the benchmarks:

    $ make run-js
    

    The results will be generated into docs/js.html.

  14. Run the Python benchmarks

    First, run the following loaders:

    $ make load-postgres
    $ make load-django
    $ make load-sqlalchemy
    $ make load-edgedb
    

    Then run the benchmarks:

    $ make run-py
    

    The results will be generated into docs/py.html.

  15. Run the .NET benchmarks

    First, run the following loaders:

    $ make load-postgres
    $ make load-edgedb
    

    Then run the benchmarks:

    $ make run-dotnet
    

    The results will be generated into docs/dotnet.html.

  16. Run the SQL benchmarks

    First, run the following loaders:

    $ make load-postgres
    $ make load-edgedb
    

    Then run the benchmarks:

    $ make run-sql
    

    The results will be generated into docs/sql.html.

  17. [Optional] Run a custom benchmark

    The benchmarking system can be customized by running ``bench.py`` directly.

    python bench.py
      --html <path/to/file>
      --json <path/to/file>
      --concurrency <number>
      --query <query_name>
      [targets]
    

    The query_name must be one of the following options. To benchmark multiple queries, pass the --query flag multiple times.

    • get_movie
    • get_person
    • get_user
    • update_movie
    • insert_user
    • insert_movie
    • insert_movie_plus

    Specify a custom set of targets with a space-separated list of the following options:

    • typeorm
    • sequelize
    • prisma
    • edgedb_js_qb
    • django
    • django_restfw
    • mongodb
    • sqlalchemy
    • edgedb_py_sync
    • edgedb_py_json
    • edgedb_py_json_async
    • edgedb_go
    • edgedb_go_json
    • edgedb_go_graphql
    • edgedb_go_http
    • edgedb_js
    • edgedb_js_json
    • postgres_asyncpg
    • postgres_psycopg
    • postgres_pq
    • postgres_pgx
    • postgres_pg
    • postgres_hasura_go

    You can see a full list of options like so:

    $ python bench.py --help