16 changes: 8 additions & 8 deletions .env
Original file line number Diff line number Diff line change
@@ -1,16 +1,16 @@
VITE_BASE_PATH="/study/"
VITE_FIREBASE_CONFIG='
{
apiKey: "AIzaSyAm9QtUgx1lYPDeE0vKLN-lK17WfUGVkLo",
authDomain: "revisit-utah.firebaseapp.com",
projectId: "revisit-utah",
storageBucket: "revisit-utah.appspot.com",
messagingSenderId: "811568460432",
appId: "1:811568460432:web:995f6b4f1fc8042b5dde15"
apiKey: "AIzaSyDq1njfXz0s7SQICEQCL_VjbSngqD53-0A",
authDomain: "view-revisit.firebaseapp.com",
projectId: "view-revisit",
storageBucket: "view-revisit.firebasestorage.app",
messagingSenderId: "595567496565",
appId: "1:595567496565:web:f4a7ee02bd8a3efb1e573c"
}
'
VITE_STORAGE_ENGINE="localStorage" # "firebase" or "supabase" or "localStorage" or your own custom storage engine
VITE_RECAPTCHAV3TOKEN="6LdjOd0lAAAAAASvFfDZFWgtbzFSS9Y3so8rHJth" # recaptcha SITE KEY
VITE_STORAGE_ENGINE="firebase" # "firebase" or "supabase" or "localStorage" or your own custom storage engine
VITE_RECAPTCHAV3TOKEN="6LfaDFcsAAAAAIeDQp0kGgG8TcGaV6k1yWhes7U9" # recaptcha SITE KEY
VITE_REPO_URL="https://github.com/revisit-studies/study/tree/main/public/" # Set the url for the "view source" link on the front page

VITE_SUPABASE_URL="https://supabase.revisit.dev"
4 changes: 4 additions & 0 deletions public/global.json
@@ -1,6 +1,7 @@
{
"$schema": "https://raw.githubusercontent.com/revisit-studies/study/v2.4.1/src/parser/GlobalConfigSchema.json",
"configsList": [
"grasp-explorer-study",
"tutorial",
"tutorial-replication",
"demo-html",
@@ -55,6 +56,9 @@
"test-step-logic"
],
"configs": {
"grasp-explorer-study": {
"path": "grasp-explorer-study/config.json"
},
"tutorial": {
"path": "tutorial/config.json"
},
16 changes: 16 additions & 0 deletions public/grasp-explorer-study/assets/comparison-intro.md
@@ -0,0 +1,16 @@
# Current Tool vs. Grasp Explorer

The COMPARE ecosystem currently provides a **Google Spreadsheet** at [robot-manipulation.org](https://www.robot-manipulation.org/software/grasp-planning) for browsing grasp planning methods. It contains the same 56 methods with columns for planning approach, hardware, datasets, and more.

The Grasp Explorer was built as an interactive alternative to this spreadsheet, adding clustering, AI-powered search, and paper-level evidence retrieval.

You will now briefly revisit the current spreadsheet. As you browse it, consider:

- Which format helps you **find** a method faster?
- Which format helps you **compare** methods more effectively?
- Which format helps you **discover** methods you did not know about?
- Which format gives you a better **overview** of the field?

After browsing, we will ask you to rate the spreadsheet vs. the Grasp Explorer on each of these tasks, and to describe the strengths of each tool.

Click **Next** to view the current spreadsheet.
28 changes: 28 additions & 0 deletions public/grasp-explorer-study/assets/explore-instructions.md
@@ -0,0 +1,28 @@
# Explore the Grasp Explorer

Below is the live Grasp Explorer dashboard. Please spend **5-10 minutes** exploring it.

## Note on Development

- This study was created to gather feedback on an idea that is still in development
- Some features are not fully functional or tuned
- For example, the PDF viewer may highlight stop words or passages that are unimportant. The concept is what we are reviewing (**does it help to see actual papers and highlighted text?**)
- Our aim is first to gather the most constructive insights from users
- We will then polish the implementation and data accuracy

## Suggested activities

1. **Try asking a question** in the search bar at the top. Some example queries:
- *"How do point cloud methods compare to depth image approaches for cluttered bin picking?"*
- *"What neural network architectures are used across grasp planning methods?"*
- *"Which methods use reinforcement learning for dexterous grasping?"*

2. **Read the AI-generated insight**: Does it cite specific papers? Is the information useful?

3. **Check the Paper Evidence section**: Try clicking "View PDF" to see the actual paper (note: highlighting is a work in progress and may not be accurate)

4. **Look at the analytics charts** below the scatter plot: Do they help you understand the results?

5. **Interact with the scatter plot**: Hover over points, and click individual methods to see their details

When you are done exploring, click **Next** to provide feedback.
36 changes: 36 additions & 0 deletions public/grasp-explorer-study/assets/introduction.md
@@ -0,0 +1,36 @@
# Grasp Explorer: AI-in-the-Loop Visualization Study

Thank you for participating! We are evaluating an interactive tool built for the **COMPARE ecosystem** that helps researchers explore **56 robotic grasp planning methods** through AI-powered visualization.

## Study Structure

This study has **four sections**, each focused on a different part of the tool:

1. **The Landscape View**: The scatter plot, clustering, weight sliders, and color-by controls
2. **The AI Copilot**: Natural language querying, AI-generated insights, and interactive controls
3. **The Analytics Dashboard**: Charts explaining how the AI built its answer
4. **The Knowledge Base**: Text chunks from 34 research papers, embedding space, and query-driven paper filtering

For each section, you will first read a brief intro (with a preview of the questions), then explore the tool, and then answer feedback questions. You will **not** have access to the tool during feedback, so the preview helps you know what to look for.

## Note on development stage

This is an **early prototype** — we are sharing it now specifically to get expert feedback before investing further engineering time. Some features are not fully functional or polished:

- **View PDF** with in-paper highlighting is **work in progress** and may not function correctly
- The RAG pipeline (how the AI retrieves and interprets paper content) is our primary engineering focus going forward — your feedback on what information matters most will directly guide that work

We are evaluating the **concept and direction**, not the final implementation. Does this approach help? What should it prioritize?

## Why your input matters

As domain experts, you know what the papers actually say, what is missing, and what questions researchers in this field really need answered. Your feedback will directly shape which information this tool surfaces and how it should be organized.

## Recording

We will record your **screen and audio** during the exploration phases so we can understand how you interact with the tool. Please think aloud as you explore: tell us what you notice, what confuses you, and what you find useful.

## Time estimate

Approximately **25-30 minutes**.

Click **Next** to begin.
12 changes: 12 additions & 0 deletions public/grasp-explorer-study/assets/section1-explore.md
@@ -0,0 +1,12 @@
"**Section 1: Explore the scatter plot and clustering.**

Do NOT type a query yet. Just look at the default view:

- Hover over points to see method names
- Look at the cluster colors and labels
- **Click a cluster label in the legend** to highlight only that cluster in the scatter plot and table
- Click a method to see its details

**Think aloud**: describe what you see and whether the groupings make sense.

When done, click **Next**.
32 changes: 32 additions & 0 deletions public/grasp-explorer-study/assets/section1-intro.md
@@ -0,0 +1,32 @@
# Section 1: The Landscape View

When you first open the Grasp Explorer, you see a **scatter plot** where each dot represents one of 56 grasp planning methods. Methods that are similar (based on their planning approach, gripper type, sensor input, and other attributes) appear close together.

The methods are **projected** into two dimensions from all text and categorical features using _UMAP_, then grouped into **clusters** using _HDBSCAN_, a density-based clustering algorithm. Each cluster is labeled by its dominant characteristics (e.g., "Sampling / Two-finger / Piled").

## Interactive Cluster Filtering

The cluster legend on the left is **clickable**. Click a cluster label to highlight only those methods in the scatter plot and the table below. Click again to clear the filter.

## What to look for

- Do the clusters make sense to you as a domain expert?
- Can you identify which methods are in which group?
- Does the spatial layout (which methods are near or far from each other) match your intuition?
- Is there anything surprising about how methods are grouped?
- Try clicking a cluster in the legend — does filtering the view this way feel natural?

## Heads up: what we will ask after you explore

You will **not** be able to interact with the tool while answering feedback questions, so keep these in mind as you explore:

- Did the clusters make sense? Were any groupings surprising or missing?
- Did the spatial layout match your intuition about which methods are similar?
- Were the cluster labels clear?
- What attributes matter most to you when deciding if two methods are "similar"?

## Think aloud

Please **speak your thoughts out loud** as you explore. We are recording your screen and audio to understand your experience.

Click **Next** to explore the scatter plot.
52 changes: 52 additions & 0 deletions public/grasp-explorer-study/assets/section2-intro.md
@@ -0,0 +1,52 @@
# Section 2: The AI Copilot & Interactive Controls

The Grasp Explorer has an **AI Copilot** that lets you ask natural language questions about grasp planning methods. When you type a query:

1. The system searches a database of **1,074 text chunks from 34 research papers**
2. Methods are re-ranked by relevance to your question
3. The **attribute weights** automatically adjust to emphasize what matters for your query
4. The clustering and scatter plot update accordingly
5. An LLM generates **insight bullets** grounded in actual paper content

## Interactive controls to try

- **Weight sliders** (in the toolbar): Control how much each attribute influences the scatter plot layout. Higher weight = methods that differ on that attribute are pushed further apart. Try dragging a slider and watch the scatter plot reorganize.
- **Color by** dropdown (top right of the scatter plot): Switch the scatter plot coloring from clusters to any attribute — e.g., color by End-effector to see how gripper types distribute across the landscape.
- **Cluster legend** (right side): Click any cluster or attribute value to filter the scatter plot and table.
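To illustrate the slider idea, here is a hypothetical weighted-distance calculation in Python; the attribute encoding is invented for the example, not the tool's real data:

```python
# Hypothetical weighted distance between two methods, showing why raising a
# slider pushes methods that differ on that attribute further apart.
import numpy as np

method_a = np.array([1.0, 0.0, 1.0])  # e.g. sampling-based, two-finger, piled
method_b = np.array([0.0, 1.0, 1.0])  # differs on the first two attributes

def weighted_distance(x, y, weights):
    """Euclidean distance with a per-attribute weight on each squared term."""
    return float(np.sqrt(np.sum(weights * (x - y) ** 2)))

low = weighted_distance(method_a, method_b, np.array([0.1, 0.1, 0.1]))
high = weighted_distance(method_a, method_b, np.array([1.0, 1.0, 0.1]))
print(low < high)  # True: higher weights on differing attributes increase distance
```

Layout algorithms like UMAP work from such a weighted notion of similarity, which is why dragging a slider reorganizes the scatter plot.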

## What to try

Ask **2-3 questions** that you would genuinely want answered about grasp planning. Because you are domain experts, we especially want questions where you already know the answer — this lets us check whether the AI gets it right.

Some starting points:

- *"How do point cloud methods compare to depth image approaches for cluttered bin picking?"*
- *"What neural network architectures are used across grasp planning methods?"*
- *"Which methods handle sim-to-real transfer?"*

After asking a query, **scroll down** to see the analytics section with charts showing query-method similarity, cited references, key topics, and evidence breakdown. These help you understand how the AI arrived at its answer.

## View the source PDFs (work in progress)

After asking a query, look for the **Paper Evidence** section below the insight bullets. Each paper listed has a **"View PDF"** button on the right side. This feature is **still being developed** — highlighting may show irrelevant terms or not work at all. We are sharing it early to get your feedback on the concept: **does having direct access to the cited paper passage help you trust or verify the AI's claims?**

## Papers in the knowledge base

The AI can cite content from these 34 papers: 3DAPNet, AnyGrasp, CaTGrasp, Contact-GraspNet, DexDiffuser, DexGrasp Anything, Edge Grasp Network, Equivariant Volumetric Grasping, FoundationGrasp, GCNGrasp, GeomPickPlace, GA-DDPG, GIGA, GPD, GraspGen, GraspGPT, GraspMolmo, GraspQP, GraspSAM, GraspVLA, GraspXL, Multi-FinGAN, NeuGraspNet, OrbitGrasp, PointNetGPD, REGNet, RGBD-Grasp, Robust Grasp Planning, RobustDexGrasp, ShapeGrasp, S4G, UniGraspTransformer, VGN, ZeroGrasp.

## Heads up: what we will ask after you explore

You will answer some questions with the tool still visible, and some without. Keep these in mind:

- Were the AI insights relevant, accurate, and novel?
- Did you notice anything incorrect or fabricated?
- Did paper citations help you trust the claims?
- Did the weight sliders and color-by controls help you explore?
- Were the analytics charts (similarity scores, evidence breakdown) useful?
- What guardrails would you expect from an AI research tool?

## Think aloud

Please **speak your thoughts** as you explore. Tell us what you find useful, what confuses you, and whether the tool helps you gain insights you wouldn't get from reading the papers individually.

Click **Next** to try the AI Copilot.
31 changes: 31 additions & 0 deletions public/grasp-explorer-study/assets/section3-intro.md
@@ -0,0 +1,31 @@
# Section 3: The Analytics Dashboard

After you ask a query, the bottom of the page shows an **analytics dashboard** with several visualizations:

- **How This View Was Built**: A step-by-step explanation of what the system did to answer your query
- **Query-Method Similarity**: Bar chart showing how closely each method matches your question
- **Cited References**: Papers cited within the retrieved evidence passages
- **Papers Referenced**: Which source papers contributed evidence to the answer
- **Key Topics**: Technical terms found in the retrieved passages
- **Evidence Breakdown**: What type of content was retrieved (theory vs. implementation vs. evaluation)

## What to look for

- Do these charts help you understand the AI's answer?
- Is any chart confusing or unhelpful?
- Would you want to see different visualizations?
- Does the "How This View Was Built" section help you trust the results?

## Heads up: what we will ask after you explore

You will **not** be able to interact with the tool while answering, so pay attention to:

- Which chart was most useful? Which was least useful or confusing?
- Did "How This View Was Built" help you trust the results?
- Are there visualizations you wish were included?

## Think aloud

Please **describe what you see** in each chart and whether it adds value.

Click **Next** to review the analytics dashboard.
35 changes: 35 additions & 0 deletions public/grasp-explorer-study/assets/section4-intro.md
@@ -0,0 +1,35 @@
# Section 4: The Knowledge Base

The **Knowledge Base** page shows the 34 research papers broken into searchable text passages called **chunks**. Each chunk is a meaningful excerpt from a paper, tagged with:

- **Content type**: *How it works* (theory / algorithm), *Implementation* (code / system details), or *Results* (benchmarks / evaluation)
- **Extraction level**: *Overview* (abstract / introduction), *Section* (a specific section), or *Detail* (a fine-grained passage)
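As an illustration only, a chunk record like the one described above might be modeled as follows; the field names and example values are assumptions, not the tool's actual schema:

```python
# Hypothetical model of one knowledge-base chunk; fields and values are
# illustrative assumptions, not the tool's real data format.
from dataclasses import dataclass

@dataclass
class Chunk:
    paper: str
    text: str
    content_type: str      # "how_it_works" | "implementation" | "results"
    extraction_level: str  # "overview" | "section" | "detail"

example = Chunk(
    paper="Contact-GraspNet",
    text="Excerpt text from the paper goes here...",
    content_type="how_it_works",
    extraction_level="overview",
)
print(example.content_type)
```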

## The embedding space

At the top of the page is a scatter plot where each dot is a **text chunk** (not a method). Chunks that cover similar topics appear close together across all 34 papers. Clicking a paper chip on the right highlights its chunks in the scatter plot.

## Query-driven filtering

If you already typed a query in the AI Copilot, the Knowledge Base page will **highlight the papers whose chunks were used as evidence**. The paper chips are sorted so relevant papers appear first, and the embedding scatter dims non-relevant papers.

## What to explore

1. Click a few **content type or extraction level labels** in the legend — they filter the chunk list and scatter plot so you see only that type of evidence
2. Click a **paper chip** to see all its chunks
3. Browse the text passages — these are the raw excerpts the AI reads when answering your questions

## Heads up: what we will ask after you explore

You will **not** be able to interact with the tool while answering, so keep these in mind:

- Were the text passages readable and meaningful?
- Did the content type labels (How it works / Implementation / Results) match what was in the passages?
- If your paper is in the knowledge base, did the chunks capture its key contributions?
- If you could search all 34 papers for one thing, what would you ask?

## Think aloud

Please tell us whether the chunks feel useful and whether they capture the right information from the papers.

Click **Next** to explore the Knowledge Base.
46 changes: 46 additions & 0 deletions public/grasp-explorer-study/assets/tool-overview.md
@@ -0,0 +1,46 @@
# How the Grasp Explorer Works

The Grasp Explorer is a dashboard that visualizes **56 robotic grasp planning methods** and lets you ask natural language questions about them.

## What happens when you ask a question

1. Your question is converted into a numerical vector and compared against all 56 method descriptions to find the most relevant ones
2. A vector database of **1,074 text chunks from 34 research papers** is searched for passages relevant to your question
3. The visualization is re-clustered using HDBSCAN (a density-based clustering algorithm) with column weights adjusted based on your query
4. An LLM generates insights grounded in the actual paper content and clustering results
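Steps 1-2 above boil down to nearest-neighbor search over embedding vectors. A minimal sketch in Python, using random vectors as stand-ins for real model embeddings (the dimension and variable names are illustrative):

```python
# Minimal sketch of steps 1-2: rank stored chunk embeddings by cosine
# similarity to the query embedding. Random vectors stand in for real
# model embeddings, so the resulting ranking is arbitrary here.
import numpy as np

rng = np.random.default_rng(0)
chunk_vectors = rng.normal(size=(1074, 384))  # one row per indexed text chunk
query_vector = rng.normal(size=384)

def cosine_top_k(query, chunks, k=5):
    """Indices of the k chunks most similar to the query."""
    sims = chunks @ query / (np.linalg.norm(chunks, axis=1) * np.linalg.norm(query))
    return np.argsort(sims)[::-1][:k]

top = cosine_top_k(query_vector, chunk_vectors)
print(len(top))  # the best-matching chunks would feed the LLM prompt
```

In the real tool the retrieved passages, not just their indices, are passed to the LLM so its insight bullets can cite paper content.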

## What you see on the dashboard

- **Scatter plot**: Each dot is a grasp planning method, positioned by similarity (nearby dots = similar methods)
- **Cluster colors**: Methods are grouped into natural clusters based on shared attributes
- **AI Copilot Insight**: Bullet-point analysis citing specific papers
- **Paper Evidence**: Links to the source papers with a built-in PDF viewer
- **Analytics charts**: Method relevance scores, topic distributions, and evidence breakdowns

## Papers available in the knowledge base

The tool has indexed the following **34 papers**. Your questions will be most effective when they relate to topics covered by these methods:

| # | Method | # | Method |
|---|--------|---|--------|
| 1 | 3DAPNet | 18 | GraspQP |
| 2 | AnyGrasp | 19 | GraspSAM |
| 3 | CaTGrasp | 20 | GraspVLA |
| 4 | Contact-GraspNet | 21 | GraspXL |
| 5 | DexDiffuser | 22 | Multi-FinGAN |
| 6 | DexGrasp Anything | 23 | NeuGraspNet |
| 7 | Edge Grasp Network | 24 | OrbitGrasp (EquiFormerV2) |
| 8 | Equivariant Volumetric Grasping | 25 | PointNetGPD |
| 9 | FoundationGrasp | 26 | REGNet |
| 10 | GCNGrasp | 27 | RGBD-Grasp |
| 11 | GeomPickPlace | 28 | Robust Grasp Planning Over Uncertain Shape Completions |
| 12 | GA-DDPG | 29 | RobustDexGrasp |
| 13 | GIGA | 30 | ShapeGrasp |
| 14 | Grasp Pose Detection (GPD) | 31 | S4G |
| 15 | GraspGen | 32 | UniGraspTransformer |
| 16 | GraspGPT | 33 | VGN |
| 17 | GraspMolmo | 34 | ZeroGrasp |

The dataset also includes **22 additional methods** without indexed papers (e.g., Dex-Net series, GGCNN, UniGrasp). These appear in the visualization but the AI cannot cite their paper content.

Click **Next** to start exploring the tool.