
Commit 976c241

Add fallback "Click here to deploy"
1 parent aaab47a commit 976c241

11 files changed, +156 −156 lines changed
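
Every file in this commit makes the same one-line change: the deploy badge's Markdown image link gains alt text. In the pattern [![alt text](image-url)](target-url), the alt text is what renders when the badge image fails to load, so the added " Click here to deploy." acts as a clickable text fallback. A minimal sketch of the before/after, using placeholder example.com URLs rather than the real badge and console links:

-[![](https://example.com/badge.svg)](https://example.com/deploy)
+[![ Click here to deploy.](https://example.com/badge.svg)](https://example.com/deploy)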

README.md (+17 −17)

Large diffs are not rendered by default.

automatic1111-stable-diffusion-ui.ipynb (+130 −130)

Large diffs are not rendered by default.

controlnet.ipynb (+1 −1)
@@ -50,7 +50,7 @@
 "\n",
 "Click the badge below to get your preconfigured instance:\n",
 "\n",
-"[![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=120&name=controlnet&file=https://github.com/brevdev/notebooks/raw/main/controlnet.ipynb&python=3.10&cuda=12.0.1)\n",
+"[![ Click here to deploy.](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=120&name=controlnet&file=https://github.com/brevdev/notebooks/raw/main/controlnet.ipynb&python=3.10&cuda=12.0.1)\n",
 "\n",
 "Once you've checked out your machine and landed in your instance page, select the specs you'd like (I used **Python 3.10 and CUDA 12.0.1**; these should be preconfigured for you if you use the badge above) and click the \"Build\" button to build your verb container. Give this a few minutes.\n",
 "\n",

gguf-export.ipynb (+1 −1)
@@ -38,7 +38,7 @@
 "\n",
 "I used a GPU and dev environment from [brev.dev](https://brev.dev). Click the badge below to get your preconfigured instance:\n",
 "\n",
-"[![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=gguf-export&file=https://github.com/brevdev/notebooks/raw/main/gguf-export.ipynb&python=3.10&cuda=12.0.1)\n",
+"[![ Click here to deploy.](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=gguf-export&file=https://github.com/brevdev/notebooks/raw/main/gguf-export.ipynb&python=3.10&cuda=12.0.1)\n",
 "\n",
 "Once you've checked out your machine and landed in your instance page, select the specs you'd like (I used Python 3.10 and CUDA 12.0.1; these should be preconfigured for you if you use the badge above) and click the \"Build\" button to build your verb container. Give this a few minutes.\n",
 "\n",

julia-install.ipynb (+1 −1)
@@ -34,7 +34,7 @@
 "\n",
 "Click the badge below to get a preconfigured instance:\n",
 "\n",
-"[![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=julia-install&file=https://github.com/brevdev/notebooks/raw/main/julia-install.ipynb&python=3.10&cuda=12.1.1)\n",
+"[![ Click here to deploy.](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=julia-install&file=https://github.com/brevdev/notebooks/raw/main/julia-install.ipynb&python=3.10&cuda=12.1.1)\n",
 "\n",
 "This is for a A10G GPU. You can swap out this machine, or toggle on the \"CPU\" option if you don't need a GPU.\n",
 "\n",

llama2-finetune-own-data.ipynb (+1 −1)
@@ -51,7 +51,7 @@
 "\n",
 "I used a GPU and dev environment from [brev.dev](https://brev.dev). Click the badge below to get your preconfigured instance:\n",
 "\n",
-"[![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=llama2-finetune-own-data&file=https://github.com/brevdev/notebooks/raw/main/llama2-finetune-own-data.ipynb&python=3.10&cuda=12.0.1)\n",
+"[![ Click here to deploy.](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=llama2-finetune-own-data&file=https://github.com/brevdev/notebooks/raw/main/llama2-finetune-own-data.ipynb&python=3.10&cuda=12.0.1)\n",
 "\n",
 "The whole thing cost me $1 using this instance. A single A10G (as linked) or L4 should be enough for this dataset; anything with >= 24GB GPU Memory. You may need more GPUs and/or Memory if your sequence max_length is larger than 512.\n",
 "\n",

llama2-finetune.ipynb (+1 −1)
@@ -59,7 +59,7 @@
 "\n",
 "I used a GPU and dev environment from [brev.dev](https://brev.dev). The whole thing cost me $1 using a 1xA10G 24GB. Click the badge below to get your preconfigured instance:\n",
 "\n",
-"[![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=llama2-finetune-own-data&file=https://github.com/brevdev/notebooks/raw/main/llama2-finetune-own-data.ipynb&python=3.10&cuda=12.0.1)\n",
+"[![ Click here to deploy.](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=llama2-finetune-own-data&file=https://github.com/brevdev/notebooks/raw/main/llama2-finetune-own-data.ipynb&python=3.10&cuda=12.0.1)\n",
 "\n",
 "A single A10G (as linked) or L4 should be enough for this dataset; anything with >= 24GB GPU Memory. You may need more GPUs and/or Memory if your sequence max_length is larger than 512.\n",
 "\n",

mistral-finetune.ipynb (+1 −1)
@@ -57,7 +57,7 @@
 "\n",
 "I used a GPU and dev environment from [brev.dev](https://brev.dev). The whole thing cost me $1 using a 1xA10G 24GB. Click the badge below to get your preconfigured instance:\n",
 "\n",
-"[![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=mistral-finetune&file=https://github.com/brevdev/notebooks/raw/main/mistral-finetune.ipynb&python=3.10&cuda=12.0.1)\n",
+"[![ Click here to deploy.](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=256&name=mistral-finetune&file=https://github.com/brevdev/notebooks/raw/main/mistral-finetune.ipynb&python=3.10&cuda=12.0.1)\n",
 "\n",
 "A single A10G (as linked) or L4 should be enough for this dataset; anything with >= 24GB GPU Memory. You may need more GPUs and/or Memory if your sequence max_length is larger than 512.\n",
 "\n",

mixtral-finetune.ipynb (+1 −1)
@@ -48,7 +48,7 @@
 "\n",
 "I used a GPU and dev environment from [brev.dev](https://brev.dev). Click the badge below to get your preconfigured instance:\n",
 "\n",
-"[![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=T4:g4dn.12xlarge&diskStorage=512&name=mixtral-finetune&file=https://github.com/brevdev/notebooks/raw/main/mixtral-finetune.ipynb&python=3.10&cuda=12.0.1)\n",
+"[![ Click here to deploy.](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=T4:g4dn.12xlarge&diskStorage=512&name=mixtral-finetune&file=https://github.com/brevdev/notebooks/raw/main/mixtral-finetune.ipynb&python=3.10&cuda=12.0.1)\n",
 "\n",
 "A 4xT4 (as linked) with 16 GPU Memory per GPU was enough for me. At the time of writing this, it costs $3.91/hour, and it trained for about 2 hours for 100 training steps. You may need more GPUs and/or Memory if your sequence max_length is larger than 320 (you'll read more about this below).\n",
 "\n",

oobabooga.ipynb (+1 −1)
@@ -43,7 +43,7 @@
 "\n",
 "Click the badge below to get your preconfigured instance:\n",
 "\n",
-"[![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=120&name=oobabooga&file=https://github.com/brevdev/notebooks/raw/main/oobabooga.ipynb&python=3.10&cuda=12.0.1)\n",
+"[![ Click here to deploy.](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.xlarge&diskStorage=120&name=oobabooga&file=https://github.com/brevdev/notebooks/raw/main/oobabooga.ipynb&python=3.10&cuda=12.0.1)\n",
 "\n",
 "Once you've checked out your machine and landed in your instance page, select the specs you'd like (I used **Python 3.10 and CUDA 12.0.1**; these should be preconfigured for you if you use the badge above) and click the \"Build\" button to build your verb container. Give this a few minutes.\n",
 "\n",

zephyr-chatbot.ipynb (+1 −1)
@@ -48,7 +48,7 @@
 "\n",
 "Click the badge below to get your preconfigured instance:\n",
 "\n",
-"[![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.2xlarge&diskStorage=256&name=chatbot&file=https://github.com/brevdev/notebooks/raw/main/llama2-finetune.ipynb&python=3.10&cuda=12.0.1)\n",
+"[![ Click here to deploy.](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/brevdeploynavy.svg)](https://console.brev.dev/environment/new?instance=A10G:g5.2xlarge&diskStorage=256&name=chatbot&file=https://github.com/brevdev/notebooks/raw/main/llama2-finetune.ipynb&python=3.10&cuda=12.0.1)\n",
 "\n",
 "Once you've checked out your machine and landed in your instance page, select the specs you'd like (I used **Python 3.10 and CUDA 12.0.1**; these should be preconfigured for you if you use the badge above) and click the \"Build\" button to build your verb container. Give this a few minutes.\n",
 "\n",
