ClassroomLM's main value comes from its core application framework. Retrieval Augmented Generation (RAG) makes AI assistants more accurate, grounded, and specific by basing their answers on a knowledge base. ClassroomLM's distinguishing mechanism is a siloed knowledge base per classroom, so RAG runs separately within each classroom's specific context. The additional features (like the collaborative chat, auto-generated review materials, etc.) are layered on top of this core mechanism.
> ClassroomLM can enhance learning, improve access to information, and make AI systems easier to use across all kinds and levels of educational institutions. While geared towards classrooms, once ClassroomLM is in place at an institution it can also help subgroups conducting research, administrators with internal documentation, and even adjacent organizations, like clubs and student associations that want to give members easy access to an assistant specific to their documents and resources.
**_Each classroom has access to an LLM assistant that is RAG-enabled, allowing it to be more specific and accurate, while also being more grounded and capable of retrieving information from the class' resources, unlocking greater potential for engaging learning, peer interaction, and more._**
**More accurate, specific, and grounded**: ClassroomLM's LLM assistant provides responses with full awareness and knowledge of the classroom's specific or niche context, rather than operating in the default context of LLMs: the entire world/internet's knowledge.
> **Use case example**: An NYU professor has a variation of assembly created specifically for the classroom, called E20. Putting the E20 manual into the shared classroom dataset gave all students within this classroom access to **an assistant that is now specialized, knowledgeable, and with full context of this niche, not-seen-before language personally created by a professor.**\
> In comparison, other user-facing assistant systems gave vague, nonspecific, and inaccurate answers drawn from other assembly variants.
---
- And again, **in comparison to existing user-facing systems, all of these will be more accurate and specific because of the grounding that comes from the classroom's resource dataset.**
---
**Tested in diverse contexts**:
ClassroomLM was tested across subjects ranging from physics, various math topics, and computer science to different topics within the humanities. For a text-heavy subject like philosophy, a class with many readings, ClassroomLM shines because it can synthesize across all of them without each student having to re-upload every document.
### **Collaborative Chats with ClassroomLM**
**_Group chats with other class members and an AI assistant that's a full conversation participant, not just a bot responding to one-off Q&As_**
- Students can create multiple chatrooms per classroom and choose who to invite within each chatroom.
- Within the chatroom, students can pull the LLM into the conversation in the manner of a group chat with the **`/ask [message]`** command.
- The assistant here retains all the benefits described above for the personal chat, as it is also RAG-enabled.
#### Unique to ClassroomLM: Collaborative chat with full conversation context _and_ grounded with RAG on a classroom's resources
- With ClassroomLM, when triggered with the `/ask` command, the LLM has knowledge of the previous conversation and responds accordingly.
- It will make corrections to earlier messages even if other discussion has occurred in the meantime, and otherwise **act like a full participant in the conversation, rather than just a bot you send one-off Q&A messages to.**
- This is **unlike the common implementations of a "group chat with an AI assistant" idea very often found in company Slacks, etc.** where the LLM is only aware of the message that triggered it and responds just to that.
- The only benefit of those implementations, compared to just personally asking an LLM, is that everyone in the chat witnesses the Q&A. **ClassroomLM is much more powerful than this simplistic approach**.
#### Collaborative chat, advanced interactivity example
<!-- markdownlint-disable MD033 -->
- Here, we see the ClassroomLM assistant behaving as an actual conversation participant—in this example, <ins>**it successfully understands that it needs to keep giving new questions one-by-one within a group review session and waiting till the end to evaluate**</ins>.
- We also see that the **questions are rooted in the knowledge base**, and that the **evaluation correctly and faithfully sticks to the resources** to provide additional relevant context and give feedback.

### RAGFlow vs. ClassroomLM's responsibilities
As seen above in the diagram, the **RAGFlow** instance (note that it's self-hosted) is responsible for storing the documents within knowledge bases and handling RAG functionality during LLM chats. **ClassroomLM is responsible for the layer above this in terms of managing classrooms, collaborative chats, etc**. For example, the ClassroomLM application is what links the siloed datasets within RAGFlow to the corresponding classroom for all LLM assistant functionality.
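To make the linkage concrete: each classroom maps to one RAGFlow dataset. The sketch below shows what creating that per-classroom dataset could look like against RAGFlow's HTTP API; the endpoint path and payload follow RAGFlow's API reference, but verify them against your RAGFlow version, and every value here is a placeholder.

```bash
# Placeholders: point these at your self-hosted RAGFlow instance.
RAGFLOW_API_URL="${RAGFLOW_API_URL:-http://localhost:9380}"
RAGFLOW_API_KEY="${RAGFLOW_API_KEY:-ragflow-xxxxxxxx}"
CLASSROOM_NAME="Example Classroom"

# One siloed dataset (knowledge base) per classroom; ClassroomLM keeps the
# returned dataset id associated with the classroom record.
# The echo prints the request instead of sending it, since the RAGFlow
# instance may not be reachable from where this runs; drop the echo to execute.
echo curl -sS -X POST "$RAGFLOW_API_URL/api/v1/datasets" \
  -H "Authorization: Bearer $RAGFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"$CLASSROOM_NAME\"}"
```

ClassroomLM performs this wiring for you; the sketch only illustrates the division of responsibilities between the two systems.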
## Usage
Follow [the instructions on the Ragflow docs](https://ragflow.io/docs/dev/) to **deploy and configure** it. This includes choosing which LLM to use from the many supported options.\
Note that the deployment method detailed in the docs uses Docker Compose. Alternatively, they also have a [helm chart](https://github.com/infiniflow/ragflow/tree/main/helm) to deploy RagFlow onto a Kubernetes cluster.
> Note: since we're deploying our web app onto port 8080 as per our [Dockerfile](https://github.com/TechAtNYU/dev-team-spring-25/blob/main/Dockerfile), depending on whether or not your RagFlow engine is deployed on the same machine/network as the ClassroomLM application, you might need to change the port for RagFlow's web interface.
> Follow the instructions [here to update the HTTP and HTTPS port](<https://ragflow.io/docs/dev/configurations#:~:text=To%20update%20the%20default%20HTTP%20serving%20port%20(80)%2C>) away from 80 and 443 if you would not like RagFlow's web interface to occupy them.
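If the port mapping in your RagFlow compose file appears as `80:80`, the remap is a one-line edit. A sketch, with the caveat that the file layout is an assumption here, so check your actual RagFlow checkout before editing:

```bash
# Stand-in compose file to demonstrate the edit; in a real setup,
# apply the same sed to RagFlow's own docker-compose.yml.
cat > docker-compose.demo.yml <<'EOF'
services:
  ragflow:
    ports:
      - "80:80"
      - "443:443"
EOF

# Remap host-side ports 80/443 to 8081/8443; the container-side
# ports after the colon stay untouched.
sed -i -e 's/"80:80"/"8081:80"/' -e 's/"443:443"/"8443:443"/' docker-compose.demo.yml
```

After the edit, RagFlow's web interface is reached on 8081/8443 while ClassroomLM keeps 8080 free for itself.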
#### Create a RagFlow API Key
First, [install the Supabase CLI](https://supabase.com/docs/guides/local-development/cli/getting-started). If you already have the `npm` dependencies installed from the development setup, then you should already have it.
Next, get the _`CONNECTION_STRING`_. You can either use the dashboard and press **Connect** on the top left, or see the `Accessing Postgres` [section of the self-hosted Supabase docs](https://supabase.com/docs/guides/self-hosting/docker#accessing-postgres).
If you don't already have it, [get the Postgres CLI.](https://www.postgresql.org/download/)
And finally, run the following command to automatically set up the tables, functions, trigger, realtime functionality, etc. Replace `[CONNECTION_STRING]` with what you determined above.
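As a sketch, the invocation has this shape; `setup.sql` below is a stand-in name for whichever SQL file the repository actually provides, and the connection string is a placeholder:

```bash
# Placeholder; replace with the connection string from Supabase.
CONNECTION_STRING="postgresql://postgres:your-password@db.example.supabase.co:5432/postgres"

# psql runs the whole SQL file against the Supabase Postgres instance.
# Guarded behind RUN_SQL so the sketch is safe to run without a database;
# set RUN_SQL=1 to actually execute.
if [ -n "${RUN_SQL:-}" ] && command -v psql >/dev/null 2>&1; then
  psql "$CONNECTION_STRING" -f setup.sql
else
  echo "would run: psql \"$CONNECTION_STRING\" -f setup.sql"
fi
```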
When a user signs in with Google, it will only work if their Google account's address is from one of the allowed domains.
Add a domain (or multiple) in the following manner to `Allowed_Domains`:
| id  | domain  |
| --- | ------- |
| 1   | nyu.edu |
**Note**: In the section below, you'll see that you need to add the allowed domains to the `.env` file as well.
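If you prefer the command line over the dashboard, the row can also be added with one SQL statement through `psql`. A sketch, assuming the `id` column is auto-generated and using a placeholder connection string:

```bash
# Placeholder; replace with the connection string from Supabase.
CONNECTION_STRING="postgresql://postgres:your-password@db.example.supabase.co:5432/postgres"
# Quoted identifier matches the mixed-case Allowed_Domains table name.
SQL="insert into \"Allowed_Domains\" (domain) values ('nyu.edu');"

# Guarded behind RUN_SQL so the sketch is safe to run without a database;
# set RUN_SQL=1 to actually execute.
if [ -n "${RUN_SQL:-}" ] && command -v psql >/dev/null 2>&1; then
  psql "$CONNECTION_STRING" -c "$SQL"
else
  echo "would run: $SQL"
fi
```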
Create a `.env` file in the root of the repository based on `.env.example`:
| Variable | Description |
| --- | --- |
| NEXT_PUBLIC_SUPABASE_URL | Use either the given Supabase URL from the hosted version or a custom URL from your self-hosted solution |
| NEXT_PUBLIC_SUPABASE_ANON_KEY | Available in Supabase's dashboard or CLI |
| NEXT_PUBLIC_SITE_URL | The root URL for the site, to be used for callbacks after authentication |
| NEXT_PUBLIC_ALLOWED_EMAIL_DOMAINS | When users log in with Google, these are the only email domains allowed to sign up. **Note that this also needs to be [configured in the `Allowed_Domains` table within Supabase](#add-allowed-domains-to-database)** |
| NEXT_PUBLIC_ORGANIZATION_NAME | The name of the organization for display purposes |
| SUPABASE_SERVICE_ROLE_KEY | Available in Supabase's dashboard or CLI |
| RAGFLOW_API_KEY | [See above](#create-a-ragflow-api-key) on how to make this key |
| RAGFLOW_API_URL | Publicly available hostname to access RagFlow's API |
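Putting the variables together, a minimal `.env` sketch; every value below is a placeholder to replace with your own:

```bash
# .env — all values are placeholders
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
NEXT_PUBLIC_SITE_URL=http://localhost:8080
NEXT_PUBLIC_ALLOWED_EMAIL_DOMAINS=nyu.edu
NEXT_PUBLIC_ORGANIZATION_NAME=Tech@NYU
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
RAGFLOW_API_KEY=your-ragflow-api-key
RAGFLOW_API_URL=https://ragflow.example.edu
```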
### Deployment
Change the **container image** within `k8s/deployment.yaml` to match the image that you built and pushed.
Then deploy:
```bash
kubectl apply -f k8s
```
### Development
1. Install dependencies:\
   Assuming NPM is installed, we [recommend installing `pnpm`](https://pnpm.io/installation).\
   Then, run the following in the root directory:
   ```bash
   pnpm install
   ```
2. Start the development server:
   ```bash
   pnpm dev
   ```
   The application will be available at [http://localhost:8080](http://localhost:8080)