
Commit 5929ef8

[ChatGPT] Enhance binding (openhab#17320)
Signed-off-by: Artur-Fedjukevits <[email protected]>
1 parent 7cab153 commit 5929ef8

18 files changed (+1404 −89 lines)

bundles/org.openhab.binding.chatgpt/README.md (+51 −20)
@@ -1,24 +1,39 @@
# ChatGPT Binding

-The openHAB ChatGPT Binding allows openHAB to communicate with the ChatGPT language model provided by OpenAI.
+The openHAB ChatGPT Binding allows openHAB to communicate with the ChatGPT language model provided by OpenAI and to manage the openHAB system via [Function calling](https://platform.openai.com/docs/guides/function-calling).

-ChatGPT is a powerful natural language processing (NLP) tool that can be used to understand and respond to a wide range of text-based commands and questions.
-With this binding, you can use ChatGPT to formulate proper sentences for any kind of information that you would like to output.
+ChatGPT is a powerful natural language processing (NLP) tool that can be used to understand and respond to a wide range of text-based commands and questions.
+With this binding, users can:
+
+- Control openHAB devices: manage lights, climate systems, media players, and more with natural language commands.
+- Use multiple languages: issue commands in almost any language, enhancing accessibility.
+- Engage in conversations: have casual conversations, ask questions, and receive informative responses.
+- Use extended capabilities: utilize all other functionalities of ChatGPT, from composing creative content to answering complex questions.
+
+This integration significantly enhances the user experience, providing seamless control over smart home environments and access to the full range of ChatGPT’s capabilities.

## Supported Things

The binding supports a single thing type `account`, which corresponds to the OpenAI account that is to be used for the integration.

## Thing Configuration

-The `account` thing requires a single configuration parameter, which is the API key that allows accessing the account.
+The `account` thing requires the API key that allows access to the account.
API keys can be created and managed under <https://platform.openai.com/account/api-keys>.

-| Name     | Type | Description                                                | Default                                    | Required | Advanced |
-|----------|------|------------------------------------------------------------|--------------------------------------------|----------|----------|
-| apiKey   | text | The API key to be used for the requests                    | N/A                                        | yes      | no       |
-| apiUrl   | text | The server API where to reach the AI service               | https://api.openai.com/v1/chat/completions | no       | yes      |
-| modelUrl | text | The model url where to retrieve the available models from  | https://api.openai.com/v1/models           | no       | yes      |
+| Name             | Type    | Description                                                                                                                                                                                          | Default                                    | Required       | Advanced |
+|------------------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|----------------|----------|
+| apiKey           | text    | The API key to be used for the requests                                                                                                                                                              | N/A                                        | yes            | no       |
+| temperature      | decimal | A value between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.                                                     | 0.5                                        | no             | no       |
+| topP             | decimal | A value between 0 and 1. An alternative to sampling with temperature (nucleus sampling): the model considers only the tokens comprising the top `top_p` probability mass, so 0.1 means only the tokens in the top 10% probability mass are considered. Alter this or `temperature`, but not both. | 1.0 | no | yes |
+| apiUrl           | text    | The URL of the AI chat-completion service                                                                                                                                                            | https://api.openai.com/v1/chat/completions | no             | yes      |
+| modelUrl         | text    | The URL from which the available models are retrieved                                                                                                                                                | https://api.openai.com/v1/models           | no             | yes      |
+| model            | text    | The model to be used for the HLI service                                                                                                                                                             | gpt-4o-mini                                | no             | yes      |
+| systemMessage    | text    | A description of your openHAB system that helps the AI control your smart home                                                                                                                       | N/A                                        | if HLI is used | yes      |
+| maxTokens        | decimal | The maximum number of tokens to generate in the completion                                                                                                                                           | 500                                        | no             | yes      |
+| keepContext      | decimal | How long the HLI service retains context between requests, in minutes                                                                                                                                | 2                                          | no             | yes      |
+| contextThreshold | decimal | The maximum total number of tokens included in the context                                                                                                                                           | 10000                                      | no             | yes      |
+| useSemanticModel | boolean | Use the semantic model to determine the location of an item                                                                                                                                          | true                                       | no             | yes      |

The advanced parameters `apiUrl` and `modelUrl` can be used if another ChatGPT-compatible service is used, e.g. a local installation of [LocalAI](https://github.com/go-skynet/LocalAI).
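For illustration (not part of this commit), a minimal sketch of an `account` thing pointed at a local LocalAI instance; the host, port, and placeholder key below are assumptions that depend on your LocalAI setup:

```java
// Sketch only: values are placeholders, adjust to your LocalAI deployment
Thing chatgpt:account:local [
    apiKey="not-needed-for-localai",
    apiUrl="http://192.168.1.10:8080/v1/chat/completions",
    modelUrl="http://192.168.1.10:8080/v1/models"
]
```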
@@ -33,32 +48,41 @@ It is possible to extend the thing with further channels of type `chat`, so that

Each channel of type `chat` takes the following configuration parameters:

-| Name          | Type    | Description                                                                                                                                                 | Default       | Required | Advanced |
-|---------------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|----------|----------|
-| model         | text    | The model to be used for the responses.                                                                                                                     | gpt-3.5-turbo | no       | no       |
-| temperature   | decimal | A value between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | 0.5           | no       | no       |
-| systemMessage | text    | The system message helps set the behavior of the assistant.                                                                                                 | N/A           | no       | no       |
-| maxTokens     | decimal | The maximum number of tokens to generate in the completion.                                                                                                 | 500           | no       | yes      |
+| Name          | Type    | Description                                                                                                                                                                                          | Default | Required | Advanced |
+|---------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|----------|----------|
+| model         | text    | The model to be used for the responses.                                                                                                                                                              | gpt-4o  | yes      | no       |
+| systemMessage | text    | The system message helps set the behavior of the assistant.                                                                                                                                          | N/A     | yes      | no       |
+| temperature   | decimal | A value between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.                                                     | 0.5     | no       | yes      |
+| topP          | decimal | A value between 0 and 1. An alternative to sampling with temperature (nucleus sampling): the model considers only the tokens comprising the top `top_p` probability mass, so 0.1 means only the tokens in the top 10% probability mass are considered. Alter this or `temperature`, but not both. | 1.0 | no | yes |
+| maxTokens     | decimal | The maximum number of tokens to generate in the completion.                                                                                                                                          | 1000    | no       | yes      |
+
+## Items Configuration
+
+Items to be used by the HLI service must be tagged with the [ "ChatGPT" ] tag.
+If no semantic model is set up, you can set the parameter `useSemanticModel` to false.
+In this case, item names must follow the naming convention `<Location>_***`, for example "Kitchen_Light", and the item labels are expected to briefly describe the item in more detail.
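A short sketch of items prepared for the HLI service with `useSemanticModel` set to `false`; the item names and labels below are made up for illustration:

```java
// Sketch only: names follow <Location>_*, labels briefly describe the device
Switch        Livingroom_Light "Ceiling light"        [ "ChatGPT" ]
Dimmer        Kitchen_Dimmer   "Kitchen main light"   [ "ChatGPT" ]
Rollershutter Bedroom_Blind    "Bedroom window blind" [ "ChatGPT" ]
```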

## Full Example

### Thing Configuration

```java
-Thing chatgpt:account:1 [apiKey="<your api key here>"] {
+Thing chatgpt:account:1 [
+    apiKey="<your api key here>"
+] {
    Channels:
        Type chat : chat "Weather Advice" [
-            model="gpt-3.5-turbo",
+            model="gpt-4o-mini",
            temperature="1.5",
            systemMessage="Answer briefly, in 2-3 sentences max. Behave like Eddie Murphy and give advice for the day based on the following weather data:"
        ]
        Type chat : morningMessage "Morning Message" [
-            model="gpt-3.5-turbo",
+            model="gpt-4o-mini",
            temperature="0.5",
            systemMessage="You are Marvin, a very depressed robot. You wish a good morning and tell the current time."
        ]
}
```

### Item Configuration

@@ -69,8 +93,14 @@ String Morning_Message { channel="chatgpt:account:1:morningMessage" }

Number Temperature_Forecast_Low
Number Temperature_Forecast_High
+Dimmer Kitchen_Dimmer "Kitchen main light" [ "ChatGPT" ]
```

+### UI Configuration of the HLI Service
+
+To enable the HLI service, go to Settings -> Voice and choose "ChatGPT Human Language Interpreter".
+A text-to-speech service must be configured.
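As a hedged sketch, the interpreter can also be invoked from a rule through the core Voice action `interpret(text, interpreters)`; the `Voice_Command` item and the interpreter id "chatgpt" below are assumptions, check the ids listed under Settings -> Voice for your system:

```java
rule "Forward spoken commands to the ChatGPT HLI"
when
    Item Voice_Command received update
then
    // Second argument selects the Human Language Interpreter to use (assumed id)
    interpret(Voice_Command.state.toString, "chatgpt")
end
```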

### Example Rules

```java
@@ -106,3 +136,4 @@ and
```

The state updates can be used for a text-to-speech output, and they will give your announcements at home a personal touch.

bundles/org.openhab.binding.chatgpt/src/main/java/org/openhab/binding/chatgpt/internal/ChatGPTChannelConfiguration.java (+5 −3)
@@ -22,11 +22,13 @@
@NonNullByDefault
public class ChatGPTChannelConfiguration {

-    public String model = "gpt-3.5-turbo";
+    public String model = "gpt-4o-mini";

-    public float temperature = 0.5f;
+    public Double temperature = 0.5;
+
+    public Double topP = 1.0;

    public String systemMessage = "";

-    int maxTokens = 500;
+    public int maxTokens = 500;
}
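For orientation, a hedged sketch of how a thing handler might read this per-channel configuration using the core `Configuration.as(...)` helper; the surrounding handler code and the `channelUID` variable are assumptions, not part of this commit:

```java
// Sketch only: typically called from handleCommand() inside the handler
Channel channel = getThing().getChannel(channelUID.getId());
if (channel != null) {
    ChatGPTChannelConfiguration config = channel.getConfiguration().as(ChatGPTChannelConfiguration.class);
    // config.model, config.temperature, config.topP and config.maxTokens now hold the
    // channel parameters, falling back to the defaults declared above when unset.
}
```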

bundles/org.openhab.binding.chatgpt/src/main/java/org/openhab/binding/chatgpt/internal/ChatGPTConfiguration.java (+8 −0)
@@ -25,4 +25,12 @@ public class ChatGPTConfiguration {
    public String apiKey = "";
    public String apiUrl = "https://api.openai.com/v1/chat/completions";
    public String modelUrl = "https://api.openai.com/v1/models";
+    public boolean useSemanticModel = true;
+    public String model = "gpt-4o-mini";
+    public Double temperature = 1.0;
+    public Integer maxTokens = 1000;
+    public Double topP = 1.0;
+    public String systemMessage = "";
+    public Integer keepContext = 2;
+    public Integer contextThreshold = 10000;
}
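Similarly, a hedged sketch of how the account thing handler would typically load this configuration via `getConfigAs(...)`, the standard `BaseThingHandler` pattern; the handler itself is not shown in this excerpt:

```java
// Sketch only: inside a handler extending BaseThingHandler (not part of this diff)
@Override
public void initialize() {
    ChatGPTConfiguration config = getConfigAs(ChatGPTConfiguration.class);
    // Parameters left unset in the thing definition keep the field defaults above,
    // e.g. model "gpt-4o-mini", keepContext 2 minutes, contextThreshold 10000 tokens.
}
```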
