Chatbot thoughts generating animation #10636

Open
wants to merge 5 commits into main

Conversation

@dawoodkhan82 (Collaborator) commented Feb 19, 2025

Description

Screen recording: Screen.Recording.2025-02-19.at.6.28.18.PM.mov
import gradio as gr
from gradio import ChatMessage
import time

sleep_time = 0.5

def simulate_thinking_chat(message, history):
    start_time = time.time()
    # Start with an empty assistant message; metadata status "pending" renders the
    # collapsible thought panel with the generating animation
    response = ChatMessage(
        content="",
        metadata={"title": "_Thinking_ step-by-step", "id": 0, "status": "pending"}
    )
    yield response

    thoughts = [
        "First, I need to understand the core aspects of the query...",
        "Now, considering the broader context and implications...",
        "Analyzing potential approaches to formulate a comprehensive answer...",
        "Finally, structuring the response for clarity and completeness..."
    ]

    # Stream the thoughts into the pending message one at a time
    accumulated_thoughts = ""
    for thought in thoughts:
        time.sleep(sleep_time)
        accumulated_thoughts += f"- {thought}\n\n"
        response.content = accumulated_thoughts.strip()
        yield response

    # Mark the thought panel as finished and record how long it took
    response.metadata["status"] = "done"
    response.metadata["duration"] = time.time() - start_time
    yield response

    # Delay before the final answer to showcase the loading animation
    time.sleep(5.0)
    response = [
        response,
        ChatMessage(
            content="Based on my thoughts and analysis above, my response is: This dummy repro shows how thoughts of a thinking LLM can be progressively shown before providing its final answer."
        )
    ]
    yield response

demo = gr.ChatInterface(
    simulate_thinking_chat,
    title="Thinking LLM Chat Interface 🤔",
    type="messages",
)

if __name__ == "__main__":
    demo.launch()

Closes: #10623

🎯 PRs Should Target Issues

Before you create a PR, please check to see if there is an existing issue for this change. If not, please create an issue before you create this PR, unless the fix is very small.

Not adhering to this guideline will result in the PR being closed.

Testing and Formatting Your Code

  1. PRs will only be merged if tests pass on CI. We recommend at least running the backend tests locally; please set up your Gradio environment locally and run the backend tests: bash scripts/run_backend_tests.sh

  2. Please run these bash scripts to automatically format your code: bash scripts/format_backend.sh, and (if you made any changes to non-Python files) bash scripts/format_frontend.sh

@gradio-pr-bot (Collaborator) commented Feb 19, 2025

🪼 branch checks and previews

Name      | Status | URL
Spaces    | ready! | Spaces preview
Storybook | ready! | Storybook preview
🦄 Changes detected! Details

Install Gradio from this PR

pip install https://gradio-pypi-previews.s3.amazonaws.com/b89e31aa8d05d2595069dfff1d8f5dce5af25320/gradio-5.16.2-py3-none-any.whl

Install Gradio Python Client from this PR

pip install "gradio-client @ git+https://github.com/gradio-app/gradio@b89e31aa8d05d2595069dfff1d8f5dce5af25320#subdirectory=client/python"

Install Gradio JS Client from this PR

npm install https://gradio-npm-previews.s3.amazonaws.com/b89e31aa8d05d2595069dfff1d8f5dce5af25320/gradio-client-1.12.0.tgz

Use Lite from this PR

<script type="module" src="https://gradio-lite-previews.s3.amazonaws.com/b89e31aa8d05d2595069dfff1d8f5dce5af25320/dist/lite.js"></script>

@gradio-pr-bot (Collaborator) commented Feb 19, 2025

🦄 change detected

This Pull Request includes changes to the following packages.

Package         | Version
@gradio/chatbot | minor
gradio          | minor
  • Maintainers can select this checkbox to manually select packages to update.

With the following changelog entry.

Chatbot thoughts generating animation

Maintainers or the PR author can modify the PR title to modify this entry.

Something isn't right?

  • Maintainers can change the version label to modify the version bump.
  • If the bot has failed to detect any changes, or if this pull request needs to update multiple packages to different versions or requires a more comprehensive changelog entry, maintainers can update the changelog file directly.

@abidlabs (Member) commented Feb 19, 2025

UI-wise, this looks good to me @dawoodkhan82, but perhaps @yvrjsharma or @aymeric-roucher can test with a real app as well?

@abidlabs requested a review from yvrjsharma on February 19, 2025 at 23:34
@yvrjsharma (Collaborator) commented:

Thanks for looking into this @dawoodkhan82 🙌
I’m having a bit of trouble replicating the effect with a gr.Chatbot(), and I’m not quite sure what I might be missing. Any help figuring this out would be great.

import gradio as gr
import time

sleep_time = 0.5

def user_message(message, history):
    """Add user message to chat history"""
    return "", history + [{"role": "user", "content": message}]

def simulate_thinking_chat(history):
    """Generate bot responses"""
    start_time = time.time()
    
    # Initialize thinking response
    thinking_header = "_Thinking_ step-by-step"
    history.append({"role": "assistant", "content": f"**{thinking_header}**\n\n"})
    yield history

    # Generate thoughts progressively
    thoughts = [
        "First, I need to understand the core aspects of the query...",
        "Now, considering the broader context and implications...",
        "Analyzing potential approaches to formulate a comprehensive answer...",
        "Finally, structuring the response for clarity and completeness..."
    ]

    accumulated_thoughts = ""
    for thought in thoughts:
        time.sleep(sleep_time)
        accumulated_thoughts += f"- {thought}\n\n"
        history[-1]["content"] = f"**{thinking_header}**\n\n{accumulated_thoughts.strip()}"
        yield history

    # Add completion note with duration
    duration = time.time() - start_time
    history[-1]["content"] = f"**{thinking_header}** _(completed in {duration:.2f}s)_\n\n{accumulated_thoughts.strip()}"
    yield history

    # Add final response with a delay to showcase loading animation
    time.sleep(5.0)
    history.append({
        "role": "assistant", 
        "content": "Based on my thoughts and analysis above, my response is: This dummy repro shows how thoughts of a thinking LLM can be progressively shown before providing its final answer."
    })
    yield history

with gr.Blocks(title="Thinking LLM Chat Interface 🤔") as demo:

    chatbot = gr.Chatbot(type="messages")
    msg = gr.Textbox(label="Message", placeholder="Type your message here...")
    clear = gr.ClearButton([msg, chatbot])

    msg.submit(user_message, [msg, chatbot], [msg, chatbot]).then(
        simulate_thinking_chat, chatbot, chatbot)

if __name__ == "__main__":
    demo.launch()
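One possible reason the effect does not show up in the snippet above is that it appends plain dict messages, whereas the ChatInterface example in the description drives the thought panel through gr.ChatMessage metadata. Below is a minimal sketch, assuming the animation keys off a metadata status of "pending"/"done" as in that example, of the same gr.Blocks demo rewritten to append ChatMessage objects; it is illustrative rather than a confirmed reproduction.

import gradio as gr
from gradio import ChatMessage
import time

sleep_time = 0.5

def user_message(message, history):
    """Add the user's turn; Chatbot(type="messages") accepts dicts and ChatMessage objects"""
    return "", history + [{"role": "user", "content": message}]

def simulate_thinking_chat(history):
    """Stream thoughts into a pending ChatMessage, then mark it done and append the final answer"""
    start_time = time.time()

    # A metadata title plus status="pending" is what the ChatInterface example above
    # uses to render the thought panel while it is still generating
    thought = ChatMessage(
        role="assistant",
        content="",
        metadata={"title": "_Thinking_ step-by-step", "status": "pending"},
    )
    history.append(thought)
    yield history

    thoughts = [
        "First, I need to understand the core aspects of the query...",
        "Now, considering the broader context and implications...",
        "Analyzing potential approaches to formulate a comprehensive answer...",
        "Finally, structuring the response for clarity and completeness...",
    ]

    accumulated = ""
    for step in thoughts:
        time.sleep(sleep_time)
        accumulated += f"- {step}\n\n"
        thought.content = accumulated.strip()
        yield history

    # Flip the status to "done" and record the duration to close out the panel
    thought.metadata["status"] = "done"
    thought.metadata["duration"] = time.time() - start_time
    yield history

    # Append the final answer as a separate assistant message
    history.append(ChatMessage(
        role="assistant",
        content="Based on my thoughts and analysis above, here is my final answer.",
    ))
    yield history

with gr.Blocks(title="Thinking LLM Chat Interface 🤔") as demo:
    chatbot = gr.Chatbot(type="messages")
    msg = gr.Textbox(label="Message", placeholder="Type your message here...")
    clear = gr.ClearButton([msg, chatbot])

    msg.submit(user_message, [msg, chatbot], [msg, chatbot]).then(
        simulate_thinking_chat, chatbot, chatbot
    )

if __name__ == "__main__":
    demo.launch()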

Successfully merging this pull request may close these issues:
  • Provide a status indicator to show that a Chatbot has not finished streaming