Astro is a unique and complete Discord bot for temporary voice channels and voice roles!
This repository contains the code for the backend of the bot.
Other repositories:
Resources:
I initially built this bot for my friends' Discord server, but it grew way beyond my expectations.
You can read the story of this project in my blog post!
Warning
I do not provide support for self-hosting the bot.
- JDK 17
- Docker
- IntelliJ (not necessary but recommended)
Everything runs on Spring Boot, and a `shared-core` module contains the code shared between the services.
The code is mostly well documented and self-explanatory, so you should be able to look through it and get a good idea of how things work.
This project contains four services:
The Discord bot itself: it receives events from Discord and responds to them.
All events are bridged from JDA (the Java library used to interact with the Discord API) to Spring Boot.
The logic of interactions (slash commands, buttons, menus, and modals) is almost entirely unified in a few interaction classes:

- `InteractionReplyHandler`: used by all interactions to submit replies to the user
- `InteractionAction`: a type-independent interaction declaration; its implementation can be a command, button, menu, etc.
- `InteractionContext`: groups together everything needed by any type of interaction to run; has subclasses for each interaction category
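To make the shape of this architecture more concrete, here is a minimal, hypothetical sketch of how these pieces could fit together; only the three class names come from this README, everything else (methods, fields, the `SlashCommandContext` subclass) is an assumption, not the actual source:

```java
// Hypothetical sketch of the interaction architecture described above.
// Types are package-private just to keep the sketch in a single file;
// in a real codebase each would live in its own file.

// Used by all interactions to submit replies to the user.
interface InteractionReplyHandler {
    void sendReply(String message, boolean ephemeral);
}

// Groups together everything needed by any type of interaction to run;
// subclasses cover the different interaction categories.
abstract class InteractionContext {
    private final InteractionReplyHandler replyHandler;

    protected InteractionContext(InteractionReplyHandler replyHandler) {
        this.replyHandler = replyHandler;
    }

    InteractionReplyHandler reply() {
        return replyHandler;
    }
}

// Example of a category-specific subclass (hypothetical).
class SlashCommandContext extends InteractionContext {
    SlashCommandContext(InteractionReplyHandler replyHandler) {
        super(replyHandler);
    }
}

// A type-independent interaction declaration; implementations can be
// commands, buttons, menus, modals, etc.
interface InteractionAction<C extends InteractionContext> {
    void execute(C context);
}
```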
It also includes a small REST API for retrieving the roles and channels that the dashboard needs to present to the user.
The dashboard doesn't query the bot service directly; instead it queries `central-api`, which in turn queries the bot service.
It interacts with:
- BigQuery: for storing statistics about bot usage
- MongoDB: storage for user / guild settings
- Redis: volatile cache and storage for temporary voice channel data
A REST API for the bot, used by the dashboard to retrieve settings and data from the bot database.
When the dashboard requests the list of roles and channels of a given guild, this API answers by querying the bot service API.
Since both run on k8s and the bot is sharded across multiple pods, the central API needs to figure out the correct pod to send each request to.
It does this by deriving the shard ID from the guild ID, and then the pod from the number of shards and pods.
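As an illustration, a minimal sketch of that routing is below. The shard formula is Discord's documented one (`shard_id = (guild_id >> 22) % shard_count`); the contiguous shard-to-pod split and the pod host name are assumptions for the example, not the repository's actual code.

```java
// Minimal sketch of routing a guild request to the right bot pod.
// Discord's sharding formula: shard_id = (guild_id >> 22) % shard_count.
// The shard-to-pod mapping below assumes shards are split contiguously
// across pods, which is an assumption for illustration.
public final class PodRouter {

    private final int shardCount; // total number of bot shards
    private final int podCount;   // number of bot pods

    public PodRouter(int shardCount, int podCount) {
        this.shardCount = shardCount;
        this.podCount = podCount;
    }

    public int shardId(long guildId) {
        return (int) ((guildId >>> 22) % shardCount);
    }

    public int podIndex(long guildId) {
        int shardsPerPod = shardCount / podCount;
        return shardId(guildId) / shardsPerPod;
    }

    // Hypothetical headless-service DNS name for the target pod.
    public String podHost(long guildId) {
        return "bot-" + podIndex(guildId) + ".bot.astro.svc.cluster.local";
    }
}
```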
User authentication is managed via a combination of JWT tokens and session cookies.
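As a rough illustration of that combination, the sketch below issues a JWT and stores it in an HTTP-only session cookie; the library choice (auth0 `java-jwt`), the cookie name, and the token lifetime are assumptions, not the actual implementation.

```java
import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.interfaces.DecodedJWT;
import jakarta.servlet.http.Cookie;

import java.time.Instant;

// Hypothetical sketch of the JWT + session-cookie combination.
public class SessionTokens {

    private final Algorithm algorithm;

    public SessionTokens(String secret) {
        this.algorithm = Algorithm.HMAC256(secret);
    }

    // Issue a signed JWT for the authenticated user and wrap it in a cookie.
    public Cookie issueSessionCookie(String discordUserId) {
        String token = JWT.create()
                .withSubject(discordUserId)
                .withExpiresAt(Instant.now().plusSeconds(7 * 24 * 3600)) // assumed lifetime
                .sign(algorithm);

        Cookie cookie = new Cookie("SESSION", token); // assumed cookie name
        cookie.setHttpOnly(true);
        cookie.setSecure(true);
        cookie.setPath("/");
        return cookie;
    }

    // Verify the token read back from the cookie on subsequent requests.
    public String verify(String token) {
        DecodedJWT decoded = JWT.require(algorithm).build().verify(token);
        return decoded.getSubject();
    }
}
```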
Simple service that checks for expired Discord premium application entitlements and updates the bot database.
Nothing fancy; it runs as a k8s cronjob.
A Discord bot that manages the bot's support server, mainly used to apply the premium role to premium users.
It includes a REST API used by the bot service: when the bot service receives an entitlement event from Discord, it sends a request to the support bot to update the user's role.
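For illustration, a minimal sketch of that call from the bot service's side might look like the following; the endpoint path, payload, and use of Spring's `RestClient` are assumptions, not the actual API exposed by the support bot.

```java
import org.springframework.web.client.RestClient;

// Hypothetical sketch of the bot service notifying the support bot after
// an entitlement event; endpoint and parameters are assumed for the example.
public class SupportBotClient {

    private final RestClient restClient;

    public SupportBotClient(String supportBotBaseUrl) {
        this.restClient = RestClient.create(supportBotBaseUrl);
    }

    // Called when the bot service receives an entitlement create/delete event.
    public void updatePremiumRole(long userId, boolean premium) {
        restClient.post()
                .uri("/premium/{userId}?premium={premium}", userId, premium) // assumed endpoint
                .retrieve()
                .toBodilessEntity();
    }
}
```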
BigQuery is used for gathering statistics about bot usage: mainly commands used, guilds joined / left, and temporary voice channels generated.
It is completely optional and you can skip configuring it if you don't need it.
If you instead want to use it, you need to:

- Create an account and a project on Google Cloud
- Enable the BigQuery API
- Create a dataset
- Create 4 tables, all partitioned by `DAY` on the field `timestamp`, with no expiration and with the partition filter required:

  CONNECTION_INVOCATION

  | Field Name | Type | Mode |
  |---|---|---|
  | guild_id | NUMERIC | REQUIRED |
  | user_id | NUMERIC | REQUIRED |
  | connection_id | NUMERIC | REQUIRED |
  | timestamp | TIMESTAMP | REQUIRED |

  GUILD_EVENT

  | Field Name | Type | Mode |
  |---|---|---|
  | guild_id | NUMERIC | REQUIRED |
  | users_count | INTEGER | REQUIRED |
  | action | STRING | REQUIRED |
  | timestamp | TIMESTAMP | REQUIRED |

  SLASH_COMMAND_INVOCATION

  | Field Name | Type | Mode |
  |---|---|---|
  | name | STRING | REQUIRED |
  | guild_id | NUMERIC | REQUIRED |
  | channel_id | NUMERIC | REQUIRED |
  | user_id | NUMERIC | REQUIRED |
  | main_option_name | STRING | NULLABLE |
  | main_option_value | STRING | NULLABLE |
  | raw_options | STRING | NULLABLE |
  | timestamp | TIMESTAMP | REQUIRED |

  TEMPORARY_VC_GENERATION

  | Field Name | Type | Mode |
  |---|---|---|
  | guild_id | NUMERIC | REQUIRED |
  | user_id | NUMERIC | REQUIRED |
  | generator_id | NUMERIC | REQUIRED |
  | timestamp | TIMESTAMP | REQUIRED |

- Create a service account with the BigQuery Admin permission (that's a bit overkill, so you can select narrower permissions if you prefer).
- Create and download a JSON key; you will be asked for its path in the .env files.
For local development, instead of using the JSON key, you can configure authentication to BigQuery by following these instructions.
For production, you should use the JSON key.
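To sanity-check the setup, here is a minimal sketch of streaming one row into the `GUILD_EVENT` table with the official Java client and the JSON key; the dataset name, the environment variable, and the `"JOIN"` action value are placeholders, and this is not the repository's actual persistence code.

```java
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;

import java.io.FileInputStream;
import java.io.IOException;
import java.time.Instant;
import java.util.Map;

// Minimal sketch: write one GUILD_EVENT row using the JSON service-account key.
public class GuildEventWriter {

    public static void main(String[] args) throws IOException {
        String keyPath = System.getenv("GCP_SERVICE_ACCOUNT_KEY_PATH"); // assumed env var name

        BigQuery bigQuery = BigQueryOptions.newBuilder()
                .setCredentials(GoogleCredentials.fromStream(new FileInputStream(keyPath)))
                .build()
                .getService();

        // "my_dataset" is a placeholder; "JOIN" is a hypothetical action value.
        InsertAllRequest request = InsertAllRequest.newBuilder(TableId.of("my_dataset", "GUILD_EVENT"))
                .addRow(Map.of(
                        "guild_id", 123456789012345678L,
                        "users_count", 42,
                        "action", "JOIN",
                        "timestamp", Instant.now().toString()))
                .build();

        InsertAllResponse response = bigQuery.insertAll(request);
        if (response.hasErrors()) {
            System.err.println("Insert failed: " + response.getInsertErrors());
        }
    }
}
```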
- Run Docker compose
docker compose -f docker/docker-compose-dev.yml up -d
- Create the development `dev.env` files.
  The `/env` folder contains a `.env.template` file for each service + 1 common `.env.template` shared by all services.
  Create a `dev.env` file for each service inside the `/env` folder and, in each of them, copy both the content of `/env/shared-core/.env.template` and the content of the `.env.template` file of the service.
- Fill the `dev.env` files; each variable has a comment explaining what it does.
- If you forked the repo, update the `ghcrOrg` value in `gradle.properties` to your GitHub username or organization name.
- Run the services.
If using IntelliJ, you will be provided with four run configurations, one for each service, already configured to pick up the correct environment files.
All the services should be up and running at this point.
You can use some web dashboards for local dev with MongoDB and Redis:
| Service | Web dashboard |
|---|---|
| MongoDB | localhost:8081 |
| Redis | localhost:8082 |
Caution
Editing documents with the MongoDB dashboard is not recommended as it tends to mess up data types!
You can access the OpenAPI documentation for each service at the following URLs:
| Service | Development URL |
|---|---|
| bot | localhost:9000/swagger-ui.html |
| central-api | localhost:9001/swagger-ui.html |
| support-bot | localhost:9002/swagger-ui.html |
Warning
I do not provide support for self-hosting the bot.
- A fork of this repository
- Kubernetes cluster
- Redis instance
- MongoDB instance
- Semaphore account
- Sentry
- kubectl installed on your local machine and with access to your k8s cluster.
Update the `ghcrOrg` value in `gradle.properties` to your GitHub username or organization name.

Create the production `prod.env` files (you can also use the development ones created in the previous section, just remember in the following steps that your files are called `dev.env`).
The `/env` folder contains a `.env.template` file for each service + 1 common `.env.template` shared by all services.
Create a `prod.env` file for each service inside the `/env` folder and, in each of them, copy both the content of `/env/shared-core/.env.template` and the content of the `.env.template` file of the service.
kubectl create namespace astro
kubectl -n astro create configmap bot-config --from-env-file=./env/bot/prod.env
kubectl -n astro create configmap central-api-config --from-env-file=./env/central-api/prod.env
kubectl -n astro create configmap entitlements-expiration-job-config --from-env-file=./env/entitlements-expiration-job/prod.env
kubectl -n astro create configmap support-bot-config --from-env-file=./env/support-bot/prod.env
Follow the instructions in the BigQuery section to obtain the service account JSON key.
You can skip this step if you don't want to enable BigQuery.
kubectl -n astro create secret generic gcp-bigquery-creds --from-file=service-account-key.json
This allows your cluster to pull images from GitHub Container Registry.
Replace `your_github_username` and `your_github_token` with your GitHub username and token (token instructions).
export DOCKER_USERNAME=your_github_username
export DOCKER_PASSWORD=your_github_token
kubectl create secret docker-registry ghcr-secret --docker-server=ghcr.io --docker-username=$DOCKER_USERNAME --docker-password=$DOCKER_PASSWORD -n astro
Each service has a `/chart` folder that contains the Helm charts that get deployed on Kubernetes using Semaphore promotions.
You need to configure the values files: for each service, copy `/service/{service_name}/chart/template.values.yaml` to `/service/{service_name}/chart/values.yaml` and fill in the values.
- Create a new organization
- Fork this repository
- Create a new project in the newly created organization, using your forked repository
- Update the `ghcrOrg` value in `gradle.properties` to your GitHub username or organization name.
- Go into Organization settings > Secrets and add the following secrets:
  - `github`, with these environment variables:

    | Environment Variable | Description |
    |---|---|
    | GITHUB_ACTOR | GitHub username for actions |
    | GITHUB_TOKEN | GitHub token for authentication, instructions |

  - `sentry`, with these environment variables:

    | Environment Variable | Description |
    |---|---|
    | SENTRY_AUTH_TOKEN | Sentry auth token, instructions |

  - `kube`, with the following configuration files:

    | Configuration Files | Description |
    |---|---|
    | /home/semaphore/.kube/config | Upload the kubeconfig file for your k8s cluster |

  - For each service, create a secret using the same name as the service and, in Configuration Files, add `/home/semaphore/.values/{service-name}/production.yaml` as the path (replace `{service-name}`) and provide the `values.yaml` file to it (you should have configured them previously in `services/{service-name}/charts/values.yaml`).
Now when you commit to any branch, Semaphore will automatically:
- build the Docker image for each service
- upload it to GitHub Container Registry
- give you a button to deploy the new image to your Kubernetes cluster
If you have an idea, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the AGPL-3.0 license. See `LICENSE.txt` for more information.