|
15 | 15 | "source": [ |
16 | 16 | "## How to get started\n", |
17 | 17 | "\n", |
18 | | - "1. Clone this repository to your local machine.\n", |
| 18 | + "1. If you are attending an instructor-led workshop, or have deployed the workshop infrastructure using the provided [CloudFormation Template](https://raw.githubusercontent.com/aws-samples/prompt-engineering-with-anthropic-claude-v-3/main/cloudformation/workshop-v1-final-cfn.yml), you can proceed to step 2. Otherwise, download the workshop [GitHub Repository](https://github.com/aws-samples/prompt-engineering-with-anthropic-claude-v-3) to your local machine.\n",
19 | 19 | "\n", |
20 | 20 | "2. Install the required dependencies by running the following command:\n", |
21 | 21 | " " |
|
31 | 31 | { |
32 | 32 | "cell_type": "code", |
33 | 33 | "execution_count": null, |
34 | | - "metadata": {}, |
| 34 | + "metadata": { |
| 35 | + "tags": [] |
| 36 | + }, |
35 | 37 | "outputs": [], |
36 | 38 | "source": [ |
37 | 39 | "%pip install -qU pip\n", |
38 | | - "%pip install -qr ../requirements.txt" |
| 40 | + "%pip install -qUr requirements.txt --force-reinstall" |
39 | 41 | ] |
40 | 42 | }, |
41 | 43 | { |
42 | 44 | "cell_type": "markdown", |
43 | 45 | "metadata": {}, |
44 | 46 | "source": [ |
45 | | - "3. Restart the kernel after installing dependencies" |
46 | | - ] |
47 | | - }, |
48 | | - { |
49 | | - "cell_type": "code", |
50 | | - "execution_count": null, |
51 | | - "metadata": {}, |
52 | | - "outputs": [], |
53 | | - "source": [ |
54 | | - "# restart kernel\n", |
55 | | - "from IPython.core.display import HTML\n", |
56 | | - "HTML(\"<script>Jupyter.notebook.kernel.restart()</script>\")" |
57 | | - ] |
58 | | - }, |
59 | | - { |
60 | | - "cell_type": "markdown", |
61 | | - "metadata": {}, |
62 | | - "source": [ |
63 | | - "4. Run the notebook cells in order, following the instructions provided." |
| 47 | + "3. Run the notebook cells in order, following the instructions provided." |
64 | 48 | ] |
65 | 49 | }, |
66 | 50 | { |
|
77 | 61 | "\n", |
78 | 62 | "- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n", |
79 | 63 | "\n", |
80 | | - "### The Anthropic SDK & the Messages API\n", |
81 | | - "We will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n", |
| 64 | + "### The Boto3 SDK & the Converse API\n", |
| 65 | + "We will be using the [Amazon Boto3 SDK](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime.html) and the [Converse API](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime/client/converse.html) throughout this tutorial. \n", |
82 | 66 | "\n", |
83 | 67 | "Below is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now." |
84 | 68 | ] |
|
97 | 81 | "outputs": [], |
98 | 82 | "source": [ |
99 | 83 | "import boto3\n", |
100 | | - "session = boto3.Session() # create a boto3 session to dynamically get and set the region name\n", |
101 | | - "AWS_REGION = session.region_name\n", |
102 | | - "print(\"AWS Region:\", AWS_REGION)\n", |
103 | | - "MODEL_NAME = \"anthropic.claude-3-haiku-20240307-v1:0\"\n", |
| 84 | + "import json\n", |
| 85 | + "from datetime import datetime\n", |
| 86 | + "from botocore.exceptions import ClientError\n", |
104 | 87 | "\n", |
105 | | - "%store MODEL_NAME\n", |
106 | | - "%store AWS_REGION" |
| 88 | + "session = boto3.Session()\n", |
| 89 | + "region = session.region_name" |
| 90 | + ] |
| 91 | + }, |
| 92 | + { |
| 93 | + "cell_type": "code", |
| 94 | + "execution_count": null, |
| 95 | + "metadata": {}, |
| 96 | + "outputs": [], |
| 97 | + "source": [ |
| 98 | + "#modelId = 'anthropic.claude-3-sonnet-20240229-v1:0'\n", |
| 99 | + "modelId = 'anthropic.claude-3-haiku-20240307-v1:0'\n", |
| 100 | + "\n", |
| 101 | + "%store modelId\n", |
| 102 | + "%store region\n", |
| 103 | + "\n", |
| 104 | + "print(f'Using modelId: {modelId}')\n", |
| 105 | + "print(f'Using region: {region}')\n",
| 106 | + "\n", |
| 107 | + "bedrock_client = boto3.client(service_name='bedrock-runtime', region_name=region)"
107 | 108 | ] |
108 | 109 | }, |
109 | 110 | { |
|
116 | 117 | { |
117 | 118 | "cell_type": "code", |
118 | 119 | "execution_count": null, |
119 | | - "metadata": {}, |
| 120 | + "metadata": { |
| 121 | + "tags": [] |
| 122 | + }, |
120 | 123 | "outputs": [], |
121 | 124 | "source": [ |
122 | | - "import boto3\n", |
123 | | - "import json\n", |
| 125 | + "def get_completion(prompt, system_prompt=None):\n", |
| 126 | + " # Define the inference configuration\n", |
| 127 | + " inference_config = {\n", |
| 128 | + " \"temperature\": 0.0, # Temperature 0.0 yields deterministic, repeatable responses\n",
| 129 | + " \"maxTokens\": 200 # Set the maximum number of tokens to generate\n", |
| 130 | + " }\n", |
| 131 | + " # Define additional model fields\n", |
| 132 | + " additional_model_fields = {\n", |
| 133 | + " \"top_p\": 1, # Set the top_p value for nucleus sampling\n", |
| 134 | + " }\n", |
| 135 | + " # Create the converse method parameters\n", |
| 136 | + " converse_api_params = {\n", |
| 137 | + " \"modelId\": modelId, # Specify the model ID to use\n", |
| 138 | + " \"messages\": [{\"role\": \"user\", \"content\": [{\"text\": prompt}]}], # Provide the user's prompt\n", |
| 139 | + " \"inferenceConfig\": inference_config, # Pass the inference configuration\n", |
| 140 | + " \"additionalModelRequestFields\": additional_model_fields # Pass additional model fields\n", |
| 141 | + " }\n", |
| 142 | + " # Check if a system prompt was provided\n",
| 143 | + " if system_prompt:\n",
| 144 | + " # If so, add the system parameter to the converse_api_params dictionary\n",
| 145 | + " converse_api_params[\"system\"] = [{\"text\": system_prompt}]\n", |
| 146 | + "\n", |
| 147 | + " # Send a request to the Bedrock client to generate a response\n", |
| 148 | + " try:\n", |
| 149 | + " response = bedrock_client.converse(**converse_api_params)\n", |
124 | 150 | "\n", |
125 | | - "bedrock = boto3.client('bedrock-runtime',region_name=AWS_REGION)\n", |
126 | | - "\n", |
127 | | - "def get_completion(prompt):\n", |
128 | | - " body = json.dumps(\n", |
129 | | - " {\n", |
130 | | - " \"anthropic_version\": '',\n", |
131 | | - " \"max_tokens\": 2000,\n", |
132 | | - " \"messages\": [{\"role\": \"user\", \"content\": prompt}],\n", |
133 | | - " \"temperature\": 0.0,\n", |
134 | | - " \"top_p\": 1,\n", |
135 | | - " \"system\": ''\n", |
136 | | - " }\n", |
137 | | - " )\n", |
138 | | - " response = bedrock.invoke_model(body=body, modelId=MODEL_NAME)\n", |
139 | | - " response_body = json.loads(response.get('body').read())\n", |
140 | | - "\n", |
141 | | - " return response_body.get('content')[0].get('text')" |
| 151 | + " # Extract the generated text content from the response\n", |
| 152 | + " text_content = response['output']['message']['content'][0]['text']\n", |
| 153 | + "\n", |
| 154 | + " # Return the generated text content\n", |
| 155 | + " return text_content\n", |
| 156 | + "\n", |
| 157 | + " except ClientError as err:\n", |
| 158 | + " message = err.response['Error']['Message']\n", |
| 159 | + " print(f\"A client error occurred: {message}\")"
142 | 160 | ] |
143 | 161 | }, |
144 | 162 | { |
|
153 | 171 | { |
154 | 172 | "cell_type": "code", |
155 | 173 | "execution_count": null, |
156 | | - "metadata": {}, |
| 174 | + "metadata": { |
| 175 | + "tags": [] |
| 176 | + }, |
157 | 177 | "outputs": [], |
158 | 178 | "source": [ |
159 | 179 | "# Prompt\n", |
|
167 | 187 | "cell_type": "markdown", |
168 | 188 | "metadata": {}, |
169 | 189 | "source": [ |
170 | | - "The `MODEL_NAME` and `AWS_REGION` variables defined earlier will be used throughout the tutorial. Just make sure to run the cells for each tutorial page from top to bottom." |
| 190 | + "The `modelId` and `region` variables defined earlier will be used throughout the tutorial. Just make sure to run the cells for each tutorial page from top to bottom." |
171 | 191 | ] |
172 | 192 | } |
173 | 193 | ], |
174 | 194 | "metadata": { |
175 | 195 | "kernelspec": { |
176 | | - "display_name": "py310", |
| 196 | + "display_name": "conda_tensorflow2_p310", |
177 | 197 | "language": "python", |
178 | | - "name": "python3" |
| 198 | + "name": "conda_tensorflow2_p310" |
179 | 199 | }, |
180 | 200 | "language_info": { |
181 | 201 | "codemirror_mode": { |
|
187 | 207 | "name": "python", |
188 | 208 | "nbconvert_exporter": "python", |
189 | 209 | "pygments_lexer": "ipython3", |
190 | | - "version": "3.12.0" |
| 210 | + "version": "3.10.14" |
191 | 211 | } |
192 | 212 | }, |
193 | 213 | "nbformat": 4, |
194 | | - "nbformat_minor": 2 |
| 214 | + "nbformat_minor": 4 |
195 | 215 | } |
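As a sketch of what the new `get_completion` helper assembles, the request payload for the Converse API can be built and inspected offline, without AWS credentials. `build_converse_params` is a hypothetical helper name introduced here for illustration; the field names (`modelId`, `messages`, `inferenceConfig`, `additionalModelRequestFields`, `system`) mirror those used in the diff above.

```python
# Hypothetical sketch (not part of the notebook diff): build the same
# Converse API request payload that get_completion assembles, so the
# structure can be inspected without calling Bedrock.

def build_converse_params(prompt, system_prompt=None,
                          model_id='anthropic.claude-3-haiku-20240307-v1:0'):
    """Return the keyword arguments that would be passed to converse()."""
    params = {
        "modelId": model_id,
        # The Converse API wraps each message's text in a content list
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"temperature": 0.0, "maxTokens": 200},
        "additionalModelRequestFields": {"top_p": 1},
    }
    # The system prompt is a top-level parameter, not a message
    if system_prompt:
        params["system"] = [{"text": system_prompt}]
    return params

params = build_converse_params("Hello, Claude!", system_prompt="Be concise.")
print(params["messages"][0]["content"][0]["text"])  # → Hello, Claude!
```

With a configured `bedrock_client`, the resulting dictionary would then be sent as `bedrock_client.converse(**params)`, matching the call in the notebook.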