# Install Notes

Given an input phrase, this project asks the Llama-2-7B LLM to generate one or more synonyms for each word. It then builds new phrases by substituting each synonym into the original phrase, asks the model to rephrase each new phrase, and returns all of the resulting phrases.

[[_TOC_]]
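The flow described above can be sketched as follows. This is a minimal sketch, not the project's actual code: `ask_model` is a hypothetical callable standing in for the real Llama-2-7B-chat call, assumed to return a list of strings for any prompt.

```python
def get_synonyms(word, ask_model):
    # Ask the (hypothetical) model for synonyms of a single word.
    return ask_model(f"Give one or more synonyms for the word '{word}'")

def generate_phrases(phrase, ask_model):
    # For each word, substitute each synonym to build candidate phrases,
    # then ask the model to rephrase every candidate and return the results.
    words = phrase.split()
    candidates = []
    for i, word in enumerate(words):
        for synonym in get_synonyms(word, ask_model):
            candidates.append(" ".join(words[:i] + [synonym] + words[i + 1:]))
    return [ask_model(f"Rephrase: {c}")[0] for c in candidates]
```

With a real model behind `ask_model`, a two-word phrase with one synonym per word yields two candidate phrases, each of which is then rephrased.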
## Developer Environment Setup

To get started with development, clone this repo to your computer.

### Prerequisites

- An Amazon EC2 instance with a GPU: g4dn.xlarge, with at least 30 GB of storage.
- A Linux host, or WSL on a Windows machine.

### Create an EC2 Instance with GPU

Instance type: g4dn.xlarge (4 vCPU, 16 GiB memory)

GPU: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
- https://www.techpowerup.com/gpu-specs/tesla-t4.c3316
- https://www.techpowerup.com/gpu-specs/nvidia-tu104.g854#:~:text=It%20features%203072%20shading%20units,contains%2048%20raytracing%20acceleration%20cores.
- 16 GB GDDR6, 256-bit memory bus, 320.0 GB/s bandwidth, 585 MHz

Storage: 100 GiB General Purpose SSD (gp3)
OS: Ubuntu Server 22.04.4 LTS (Jammy Jellyfish), 64-bit (x86)

Software image (AMI): Canonical, Ubuntu, 22.04 LTS, amd64 jammy image build on 2024-03-01 (ami-0b8b44ec9a8f90422)

Generate a key pair and download your private key.

Start your EC2 instance and copy its Public DNS address.

### Connect to EC2 Instance

Copy your private key (llama2_key.pem) to your working folder, then restrict its permissions and connect:

```
sudo chmod 400 llama2_key.pem
sudo ssh -i "llama2_key.pem" ubuntu@EC2_Public_DNS
```

Example:
```
sudo ssh -i llama2_key.pem ubuntu@ec2-<your-public-dns>.compute.amazonaws.com
```
(From WSL, sudo is required to run ssh.)

### Setup Target Environment

```
sudo apt-get update
sudo apt -y install python3 python3-dev python3-pip python3-virtualenv gcc build-essential git libffi-dev
python3 -m pip install --upgrade pip

virtualenv llama_env
source llama_env/bin/activate
```

### Register and Download Llama2

Visit the Meta website and register to download the model(s):
https://llama.meta.com/llama-downloads/

```
git clone https://github.com/meta-llama/llama.git
cd llama
```

Once registered, you will receive an email with a URL for downloading the models. You will need this URL when you run the download.sh script:
```
./download.sh
```
When prompted for which models to download, enter: 7B-chat

### Install packages

From the root of the llama repo, with the virtualenv active, install the package and its dependencies:

```
pip install -e .
```

### Test CUDA is available with torch library

At this point the NVIDIA driver is not yet installed, so torch reports that CUDA is unavailable:

<pre>
<code class="language-python">
python
Python 3.8.10 (default, Nov 26 2021, 20:14:08)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False
>>> torch._C._cuda_getDeviceCount()
0
>>>
</code>
</pre>

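The interactive checks above can be wrapped in one small helper. A sketch only, with the torch module passed in explicitly so the logic is testable without a GPU; it uses the public `torch.cuda.device_count()` rather than the private `torch._C._cuda_getDeviceCount()` shown in the transcript:

```python
def cuda_status(torch_mod):
    # Return (cuda_available, device_count) for the given torch-like module.
    # device_count() is 0 when no usable NVIDIA driver is present.
    return torch_mod.cuda.is_available(), torch_mod.cuda.device_count()
```

Running `cuda_status(torch)` on the instance should return `(False, 0)` before the driver install and `(True, 1)` afterwards.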
### Install Nvidia Driver

Identify the GPU and list the available drivers:

```
sudo /sbin/update-pciids
lspci | grep -i nvidia
00:1e.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)

sudo apt install ubuntu-drivers-common
sudo apt install alsa-utils
sudo ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:1e.0 ==
modalias : pci:v000010DEd00001EB8sv000010DEsd000012A2bc03sc02i00
vendor   : NVIDIA Corporation
model    : TU104GL [Tesla T4]
driver   : nvidia-driver-550-server - distro non-free
driver   : nvidia-driver-545 - distro non-free
driver   : nvidia-driver-550 - distro non-free recommended
driver   : nvidia-driver-535 - distro non-free
driver   : nvidia-driver-418-server - distro non-free
driver   : nvidia-driver-450-server - distro non-free
driver   : nvidia-driver-470-server - distro non-free
driver   : nvidia-driver-470 - distro non-free
driver   : nvidia-driver-535-server - distro non-free
driver   : xserver-xorg-video-nouveau - distro free builtin
```

Install the recommended driver (nvidia-driver-550):

```
sudo apt install nvidia-driver-550
```

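If you want to script the driver choice, the "recommended" entry can be pulled out of the `ubuntu-drivers devices` output. A small sketch (not part of the project):

```python
def recommended_driver(ubuntu_drivers_output):
    # Scan `ubuntu-drivers devices` output for the line that Ubuntu marks
    # as "recommended" and return the driver package name, or None.
    for line in ubuntu_drivers_output.splitlines():
        if line.strip().startswith("driver") and line.rstrip().endswith("recommended"):
            return line.split(":")[1].split()[0]
    return None
```

On the listing above this returns `nvidia-driver-550`, matching the package installed next.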
### Reboot Instance

Halt the OS, then stop and start the instance from the EC2 console so the new driver takes effect:

```
sudo halt
```

→ Stop Instance
→ Start Instance

### Connect to EC2 Instance

```
sudo ssh -i "llama2_key.pem" ubuntu@EC2_Public_DNS
```

### Test CUDA is available with torch library

```
cd /home/ubuntu/$YOUR_PATH/llama
source llama_env/bin/activate
```

With the driver installed, torch now reports that CUDA is available:

<pre>
<code class="language-python">
(llama_env) ubuntu@ip-172-31-7-235:~/IA/llama-2/llama$ python
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch._C._cuda_getDeviceCount()
1
>>>
</code>
</pre>

### Run the model locally

```
chmod +x rephrase_ai.sh
./rephrase_ai.sh
```

### Sample Test

```
🦙 Enter an english phrase: you are the best

💬 [Rephrase] ------------> You stand out from the crowd.
💬 [Rephrase] ------------> You are truly unique.
💬 [Rephrase] ------------> You are an individual with your own special qualities.
💬 [Rephrase] ------------> You are one of a kind.
💬 [Rephrase] ------------> You are extraordinary.
💬 [Rephrase] ------------> You are a rare find.
💬 [Rephrase] ------------> You are a breath of fresh air.
💬 [Rephrase] ------------> You are a shining star.
💬 [Rephrase] ------------> You are a cut above the rest.
💬 [Rephrase] ------------> You are a true original.
💬 [Rephrase] ------------> You are exceptional.
💬 [Rephrase] ------------> You are remarkable.
💬 [Rephrase] ------------> You are impressive.
💬 [Rephrase] ------------> You are outstandingly talented.
💬 [Rephrase] ------------> You are truly remarkable.
💬 [Rephrase] ------------> You are an exceptional individual.
💬 [Rephrase] ------------> You are a standout.
💬 [Rephrase] ------------> You are a cut above the rest.
💬 [Rephrase] ------------> You are an extraordinary person.
💬 [Rephrase] ------------> You are a remarkable individual.
💬 [Rephrase] ------------> You are truly remarkable.
💬 [Rephrase] ------------> Your exceptional qualities are truly remarkable.
💬 [Rephrase] ------------> You have an incredible talent for being remarkable.
💬 [Rephrase] ------------> Your dedication and hard work make you truly remarkable.
💬 [Rephrase] ------------> You are an inspiration to be around, your remarkable qualities are evident in everything you do.
💬 [Rephrase] ------------> Your remarkable abilities and achievements are a testament to your hard work and dedication.
💬 [Rephrase] ------------> You have a unique gift for making a difference, your remarkable qualities are evident in everything you do.
💬 [Rephrase] ------------> You are a remarkable individual, your talents and abilities are truly exceptional.
💬 [Rephrase] ------------> Your remarkable qualities are what make you stand out from the crowd.
💬 [Rephrase] ------------> You have a remarkable ability to inspire and motivate others.
💬 [Rephrase] ------------> Your remarkable qualities are a reflection of your passion and dedication.
💬 [Rephrase] ------------> People are the best.
💬 [Rephrase] ------------> Individuals are the best.
💬 [Rephrase] ------------> Humans are the best.
💬 [Rephrase] ------------> Folks are the best.
💬 [Rephrase] ------------> Folk are the best.
💬 [Rephrase] ------------> Each person has their own unique strengths and abilities, making them an invaluable asset to any team or organization.
💬 [Rephrase] ------------> The diversity of individuals is what makes a group or team truly exceptional, as each person brings their own set of skills and perspectives to the table.
💬 [Rephrase] ------------> Individuality is what drives innovation and progress, as people are encouraged to think outside the box and bring their own ideas to the table.
💬 [Rephrase] ------------> The strength of a group lies in the diversity of its members, as each person brings their own unique experiences and talents to the table.
💬 [Rephrase] ------------> By valuing and embracing individuality, organizations can tap into the collective potential of their employees, leading to greater creativity, productivity, and success.
```

### Check GPU usage while running the model

On another terminal run:

```
nvidia-smi
Thu Apr 11 16:13:15 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67                 Driver Version: 550.67         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla T4                       Off |   00000000:00:1E.0 Off |                    0 |
| N/A   37C    P0            68W /   70W  |  14033MiB /  15360MiB  |    99%       Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory  |
|        ID   ID                                                              Usage       |
|=========================================================================================|
|    0   N/A  N/A      1754      C   .../llama-2/llama/llama_env/bin/python      14030MiB |
+-----------------------------------------------------------------------------------------+
```
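
For monitoring in a script, nvidia-smi's CSV query mode is easier to consume than the table above. A sketch of a parser for the output of `nvidia-smi --query-gpu=memory.used,memory.total,utilization.gpu --format=csv,noheader` (the parsing helper itself is illustrative, not part of the project):

```python
def parse_gpu_query(csv_text):
    # Parse `nvidia-smi --query-gpu=memory.used,memory.total,utilization.gpu
    # --format=csv,noheader` output into one dict per GPU line.
    rows = []
    for line in csv_text.strip().splitlines():
        used, total, util = (field.strip() for field in line.split(","))
        rows.append({"memory.used": used,
                     "memory.total": total,
                     "utilization.gpu": util})
    return rows
```

While the model is running, the single T4 line should show close to full memory use and ~99% utilization, matching the table above.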
