I've been extensively testing this method on medical question answering, and so far it works great. However, the logic behind document processing and retrieval looks complicated from the code alone. For example, I noticed that 52 pages of context were being stuffed into the LLM prompt, and I wonder how the LLM (GPT-4o) can handle that much context. Also, processing is much faster in my application, where I stuff in only 10 chunks (roughly 4 pages).
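One quick way to sanity-check how much context is actually being stuffed into the prompt is to estimate tokens and page-equivalents from the chunk texts before sending them. The helper below is a rough sketch using assumed heuristics (~0.75 words per token, ~500 words per page), not the app's actual accounting:

```python
def estimate_context_size(chunks):
    """Estimate token and page-equivalents for a list of text chunks.

    Heuristics (assumptions, not exact): ~0.75 words per token
    (so tokens ~= words / 0.75) and ~500 words per page.
    For exact counts, use the model's own tokenizer instead.
    """
    words = sum(len(c.split()) for c in chunks)
    return {
        "chunks": len(chunks),
        "words": words,
        "approx_tokens": int(words / 0.75),
        "approx_pages": round(words / 500, 1),
    }

# 10 chunks of ~200 words each -- roughly the "10 chunks ~ 4 pages" case
chunks = ["lorem ipsum " * 100] * 10
print(estimate_context_size(chunks))
```

Printing this summary right before the LLM call makes it obvious when the retrieval step is assembling far more context than expected.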
A diagram visualizing the chunking, entity extraction, community formation, post-processing, and retrieval steps would be much appreciated and, I believe, useful to anybody.
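Until such a diagram exists, the stages listed above can be sketched end to end in a few dozen lines. Everything below is illustrative: the function names, the capitalized-word "entity extractor", and the co-occurrence "communities" are toy stand-ins (a real pipeline would use LLM-based extraction and graph community detection such as Leiden), but the data flow — chunk, extract entities, group them into communities, retrieve community chunks for a query — mirrors the steps in question:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    chunk_ids: set = field(default_factory=set)  # chunks mentioning this entity

def chunk(text, size=8):
    """Split text into fixed-size word windows (toy chunker)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def extract_entities(chunks):
    """Toy 'extraction': treat capitalized words as entity mentions.
    A real pipeline would run an LLM extraction prompt per chunk."""
    entities = {}
    for i, c in enumerate(chunks):
        for w in c.split():
            w = w.strip(".,;?")
            if w.istitle():
                entities.setdefault(w, Entity(w)).chunk_ids.add(i)
    return entities

def form_communities(entities):
    """Group entities that co-occur in a chunk (union-find; a toy
    stand-in for graph community detection such as Leiden)."""
    parent = {n: n for n in entities}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    by_chunk = {}
    for e in entities.values():
        for cid in e.chunk_ids:
            by_chunk.setdefault(cid, []).append(e.name)
    for names in by_chunk.values():
        for other in names[1:]:
            parent[find(other)] = find(names[0])
    groups = {}
    for n in entities:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

def retrieve(query, entities, communities, chunks):
    """Return the chunks of every community containing a query entity."""
    hits = {w.strip(".,;?") for w in query.split()} & entities.keys()
    chunk_ids = set()
    for comm in communities:
        if comm & hits:
            for name in comm:
                chunk_ids |= entities[name].chunk_ids
    return [chunks[i] for i in sorted(chunk_ids)]

text = ("Aspirin reduces fever. Aspirin interacts with Warfarin. Warfarin "
        "is an anticoagulant. Ibuprofen also reduces fever.")
chunks = chunk(text)
entities = extract_entities(chunks)
communities = form_communities(entities)
print(retrieve("What interacts with Aspirin?", entities, communities, chunks))
```

Note how retrieval returns whole communities, not just the chunks that literally match the query — that expansion is one reason the assembled context can grow far beyond the matching chunks themselves.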
Yes, I agree: more documentation (a blog post, diagrams, etc.) describing in detail how the app works would be really great, especially for the retrieval process in each different "chat mode". You can figure it all out by reading through the code, but documentation and diagrams would make it much easier and quicker to understand.