This document describes how to self-host tracekit on a cloud server for a single user. The setup is:
- Docker container running the Flask web app on HTTP (port 5000), bound to localhost only
- Configuration stored in PostgreSQL — no config file required; use the Settings UI after first boot
- A reverse proxy in front doing SSL termination (not covered here — use Caddy, nginx, etc.)
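The reverse-proxy setup is out of scope here, but as a rough illustration, a minimal Caddyfile (assuming you choose Caddy; the domain is a placeholder) could look like:

```
# Caddyfile sketch — Caddy obtains and renews TLS certificates automatically
tracekit.example.com {
    reverse_proxy 127.0.0.1:5000
}
```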
The published image is `ghcr.io/ckdake/tracekit:latest`.

You will need:
- A Linux server (VPS, EC2, Droplet, etc.) with Docker installed
- A domain name pointed at your server's IP
Create a dedicated tracekit user with /opt/tracekit as its home directory, and add it to the docker group:
```shell
sudo useradd --system --home /opt/tracekit --create-home --shell /bin/bash tracekit
sudo usermod -aG docker tracekit
```

That's it for root. Switch to the tracekit user for everything from here on:
```shell
su - tracekit
```

Capture the uid/gid — these go into .env so compose can run every container as this user:
```shell
echo "TRACEKIT_UID=$(id -u)" >> ~/.env
echo "TRACEKIT_GID=$(id -g)" >> ~/.env
```

Create the directories and copy in the compose file:
```shell
mkdir -p ~/data/activities ~/pgdata ~/redis
chown -R tracekit:tracekit ~/data ~/pgdata ~/redis
cp docker-compose.yml ~/docker-compose.yml
```

Create a .env file:
```shell
touch ~/.env
chmod 600 ~/.env
```

Edit ~/.env:
```
# Host uid/gid — all containers run as this user so bind-mount files
# are owned by tracekit:tracekit on the host. Set with:
#   echo "TRACEKIT_UID=$(id -u)" >> ~/.env
#   echo "TRACEKIT_GID=$(id -g)" >> ~/.env
TRACEKIT_UID=
TRACEKIT_GID=

# Flask session signing key — must be set and must be the same value across
# all web container replicas/workers. Without a consistent key, sessions
# signed by one worker fail validation on another and users will be
# randomly logged out. Generate once with:
#   python3 -c "import secrets; print(secrets.token_hex(32))"
SESSION_KEY=change_me_to_a_long_random_string

# PostgreSQL — used by both the postgres container and the tracekit container.
# Choose a strong random password; it never needs to leave this file.
POSTGRES_PASSWORD=change_me_to_a_strong_random_password
```

Docker Compose picks this file up automatically from the working directory. All provider credentials (Strava, RideWithGPS, Garmin) are stored in the database and configured through the Settings UI — no credentials belong in this file.
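Rather than inventing the two secrets by hand, you can generate and append them in one step (a sketch; assumes `python3` and `openssl` are installed on the host, and that you delete the `change_me` placeholder lines afterwards):

```shell
# Generate a 64-hex-char session key and a 48-hex-char DB password,
# then append them to ~/.env (remove the change_me placeholder lines).
echo "SESSION_KEY=$(python3 -c 'import secrets; print(secrets.token_hex(32))')" >> ~/.env
echo "POSTGRES_PASSWORD=$(openssl rand -hex 24)" >> ~/.env
```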
Add your DSN to .env to enable Sentry. If unset, Sentry is completely disabled:
```
# Sentry error monitoring — leave unset to disable
SENTRY_DSN=https://<key>@o<org>.ingest.us.sentry.io/<project>
SENTRY_ENV=production
SENTRY_DEBUG=false
```

Get the DSN from your Sentry project under Settings → Client Keys (DSN).
Add the following to .env to enable the Subscription section on the Settings page. If any of the required keys are absent, the section is hidden entirely.

```
# The public URL users reach your tracekit instance at — used to build
# Stripe success/cancel/webhook callback URLs.
APP_URL=https://tracekit.example.com

# Stripe — all five values are required to enable subscriptions.
STRIPE_SECRET_KEY=sk_live_...
STRIPE_PUBLISHABLE_KEY=pk_live_...
STRIPE_WEBHOOK_SECRET=whsec_...
STRIPE_MONTHLY_PRICE_ID=price_...
STRIPE_ANNUAL_PRICE_ID=price_...
```

- Create your products & prices in the Stripe Dashboard → Product catalog:
  - Create a product (e.g. "tracekit subscription").
  - Add two recurring prices: one monthly, one annual.
  - Copy each price's ID (starts with `price_`) into `STRIPE_MONTHLY_PRICE_ID` / `STRIPE_ANNUAL_PRICE_ID`.
- Get your API keys from Dashboard → Developers → API keys:
  - Copy the Secret key (`sk_live_…`) → `STRIPE_SECRET_KEY`.
  - Copy the Publishable key (`pk_live_…`) → `STRIPE_PUBLISHABLE_KEY`.
- Create a webhook endpoint in Dashboard → Developers → Webhooks:
  - Endpoint URL: `https://tracekit.example.com/api/stripe/webhook`
  - Subscribe to these events: `checkout.session.completed`, `customer.subscription.created`, `customer.subscription.updated`, `customer.subscription.deleted`
  - After saving, reveal the Signing secret (`whsec_…`) → `STRIPE_WEBHOOK_SECRET`.
- Enable the Customer Portal in Dashboard → Settings → Billing → Customer portal so users can cancel or change their plan without contacting you.

Use `sk_test_…` / `pk_test_…` keys and test price IDs during development. Switch to live keys for production.
Tables are created automatically on every container start — the app retries the DB connection for up to 60 s, so containers can start in any order.
All provider credentials are stored in the database. Visit /settings to enter and update them. The instructions below are also shown on the Settings page next to each provider card.
RideWithGPS: no CLI step is needed. Enter your email, password, and API key directly in the Settings UI under the RideWithGPS provider card.
Strava:

- Register the callback URL in your Strava developer app at strava.com/settings/api. Set the Authorization Callback Domain to the domain you are hosting tracekit on (e.g. `tracekit.example.com`). The exact callback path used is `/api/auth/strava/callback`.
- In the Settings UI, enter your Strava `client_id` and `client_secret` under the Strava provider card and save.
- Click "Authenticate with Strava" on the Strava provider card. A popup opens, redirects to Strava for authorization, and saves the tokens automatically on return.

Re-authenticate any time tokens expire by clicking the button again.
Strava's API terms require that Strava data is never used as a source of truth for other providers, and that all Strava data is deleted immediately when a user disconnects. TraceKit enforces this by setting `"write_only": True` on the Strava provider entry in `tracekit/appconfig.py`. The sync engine reads this flag and:
- never selects Strava as the authoritative provider when resolving name/equipment conflicts
- never propagates Strava-only activities outward to other providers
- excludes Strava activity records older than 7 days from correlation (per API TOS data freshness requirements)
- deletes all Strava activity data immediately on user disconnect (both via the Settings UI "Disconnect Strava" button and via the Strava deauthorization webhook)
If `"write_only"` is removed or set to `False` for the Strava provider in `tracekit/appconfig.py`, Strava will start behaving like any other provider (source-of-truth capable). Do not do this — it would violate the Strava API TOS.
Garmin: authentication uses `garth` for OAuth. Tokens are stored in the database and are valid for roughly a year.
In the Settings UI, click Authenticate on the Garmin provider card. Enter your Garmin Connect email and password in the popup. If your account has MFA enabled, you will be prompted for the one-time code sent to your email. Tokens are saved automatically on success.
To re-authenticate after tokens expire, click the Re-authenticate button on the card.
As the tracekit user, everything lives under /opt/tracekit/:

```shell
cd /opt/tracekit
docker compose up -d
```

On first boot, visit http://your-domain/signup to create the admin account (the first user to register is automatically the admin). Then visit /settings to configure your timezone, enabled providers, and credentials. Configuration is stored in PostgreSQL and persists across restarts.
The compose file binds to 127.0.0.1:5000 only, so the port is not publicly exposed. Your reverse proxy connects to it internally.
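If you want the stack to come back up automatically after a reboot (the compose file's `restart:` policies may already cover this), one option is a small systemd unit — a sketch; the unit name and binary path are assumptions, adjust for your distro:

```ini
# /etc/systemd/system/tracekit.service (install as root, then:
#   systemctl enable --now tracekit)
[Unit]
Description=tracekit docker compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
User=tracekit
WorkingDirectory=/opt/tracekit
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```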
Key volume mounts (defined in docker-compose.yml):

- `/opt/tracekit/data` → `/opt/tracekit/data` (read-write) — activity files (FIT/GPX/TCX exports)
- `/opt/tracekit/pgdata` → PostgreSQL data directory — all database files live here
- `/opt/tracekit/redis` → Redis persistence directory

Config is stored in PostgreSQL (the `appconfig` table) and managed entirely through the Settings UI at `/settings`. No config file needs to be mounted or maintained on the host.
Services started by `docker compose up -d`:

| Container | Role |
|---|---|
| `tracekit` | Flask web app (port 5000) |
| `tracekit-worker` | Celery worker — runs pull jobs, concurrency=1 |
| `tracekit-beat` | Celery beat scheduler — fires the daily no-op heartbeat |
| `tracekit-db` | PostgreSQL 17 |
| `tracekit-redis` | Redis (broker + result backend) |
| `tracekit-flower` | Flower task monitor (port 5555, localhost only) |
To back up the database:

```shell
docker exec tracekit-db pg_dump -U tracekit tracekit > tracekit_backup_$(date +%Y%m%d).sql
```

Verify it's running:
```shell
docker compose ps
docker compose logs -f
curl http://127.0.0.1:5000/health
```

Flower (task queue observability) is available at http://127.0.0.1:5555 on the host. To expose it through your reverse proxy, add a vhost/location pointing at port 5555. Keep it behind auth — it shows all task history and can inspect results.
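As one hedged example (nginx assumed; the `/flower/` path and htpasswd file are choices made here, not tracekit requirements):

```nginx
# Inside the existing server { } block of your tracekit vhost.
# Flower may also need to be told about the sub-path (e.g. via its
# --url-prefix option) for its links to resolve correctly.
location /flower/ {
    auth_basic "Flower";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:5555/;
}
```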
Triggering a pull from the UI: visit the /calendar page and click Pull on any month card. The job is enqueued immediately and runs in the worker container; Flower will show its progress and result.
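The pg_dump backup shown earlier can be automated with cron (a sketch; run `crontab -e` as the tracekit user, create /opt/tracekit/backups first, and note that `%` must be escaped in crontab command fields):

```
# Daily database backup at 03:15
15 3 * * * docker exec tracekit-db pg_dump -U tracekit tracekit > /opt/tracekit/backups/tracekit_$(date +\%Y\%m\%d).sql
```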
Local network access (testing only): to reach the app from another device on your LAN, change the port binding in `docker-compose.yml` from `"127.0.0.1:5000:5000"` to `"5000:5000"`. Do not use this in production — use a reverse proxy with SSL instead.
To upgrade to the latest image:

```shell
cd /opt/tracekit
docker compose pull
docker compose down && docker compose up -d
```

The container exposes a health endpoint:
```shell
curl http://127.0.0.1:5000/health
```

Docker also checks this automatically every 30 seconds (HEALTHCHECK is defined in the image). Check container health with:

```shell
docker inspect --format='{{.State.Health.Status}}' tracekit
```
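Deploy scripts can use that status to block until the app is actually up. A small helper sketch (the `wait_healthy` name is invented here, not part of tracekit):

```shell
# Poll a container's Docker health status until it reports "healthy",
# returning non-zero if it never does within the given attempts.
# Usage: wait_healthy tracekit 12   (12 attempts, 5 s apart, ~60 s total)
wait_healthy() {
  name=$1
  attempts=${2:-12}
  for _ in $(seq 1 "$attempts"); do
    status=$(docker inspect --format='{{.State.Health.Status}}' "$name" 2>/dev/null)
    [ "$status" = "healthy" ] && return 0
    sleep 5
  done
  return 1
}
```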