From 65e43a910b3a02f6589096d1a354f9cdb31caa47 Mon Sep 17 00:00:00 2001
From: Anthony Anyanwu
Date: Sun, 12 Jan 2025 01:17:37 -0800
Subject: [PATCH] Update risks.en.mdx

minor updates to the text
---
 pages/risks.en.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pages/risks.en.mdx b/pages/risks.en.mdx
index 3f1f3c451..acf63acaf 100644
--- a/pages/risks.en.mdx
+++ b/pages/risks.en.mdx
@@ -5,9 +5,9 @@ import {Cards, Card} from 'nextra-theme-docs'
 import {FilesIcon} from 'components/icons'
 import ContentFileNames from 'components/ContentFileNames'
 
-Well-crafted prompts can lead to effective used of LLMs for various tasks using techniques like few-shot learning and chain-of-thought prompting. As you think about building real-world applications on top of LLMs, it also becomes crucial to think about the misuses, risks, and safety practices involved with language models.
+Well-crafted prompts can lead to effective use of LLMs for various tasks using techniques like few-shot learning and chain-of-thought prompting. As you think about building real-world applications on top of LLMs, it becomes crucial to consider the misuses, risks, and safety practices involved with language models.
 
-This section focuses on highlighting some of the risks and misuses of LLMs via techniques like prompt injections. It also highlights harmful behaviors and how to potentially mitigate them via effective prompting techniques and tools like moderation APIs. Other topics of interest include generalizability, calibration, biases, social biases, and factuality to name a few.
+This section highlights some of the risks and misuses of LLMs via techniques like prompt injections. It also highlights harmful behaviors and how to potentially mitigate them via effective prompting techniques and tools like moderation APIs. Other topics of interest include generalizability, calibration, biases, social biases, and factuality.