Hi, I'm Jet Xu 👋

LinkedIn | Blog | Email

"The future of AI coding isn't just larger context windowsโ€”it's smarter context retrieval."

I am an architect solving the "Context Precision" problem in AI software engineering.

While others focus on stuffing more code into an LLM, I work on Repository Graph RAG: building the "GPS" for codebases. My goal is to enable AI to navigate complex, cross-module dependencies and understand architectural impact with surgical precision and minimal token usage.


1. The Proof of Concept: LlamaPReview

High-Precision Code Review via Contextual Retrieval

I built LlamaPReview to prove that less is more: by retrieving only the relevant dependency graph, we can outperform massive context windows.


  • The Metric: Achieved a 61% Signal-to-Noise Ratio (3x the industry average) by filtering out irrelevant code noise. (A minimal sketch of the metric appears after this list.)
  • The Evidence: Caught a critical transaction bug in Vanna.ai (20K stars) that required tracing logic across multiple hidden modules, something standard "diff-based" AI missed entirely.
  • The Product: A validated SaaS solution trusted by 4,000+ repositories.
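
As a rough illustration, here is a minimal sketch of how a signal-to-noise ratio over review comments could be computed. The labeling scheme and exact definition are assumptions for illustration, not LlamaPReview's published methodology:

```python
from dataclasses import dataclass

# Assumption: "signal" means a review comment a human judged actionable.
# LlamaPReview's actual definition and labeling process may differ.

@dataclass
class ReviewComment:
    text: str
    actionable: bool  # human label: does this point at a real issue?

def signal_to_noise(comments: list[ReviewComment]) -> float:
    """Fraction of review comments that carry actionable signal."""
    if not comments:
        return 0.0
    return sum(c.actionable for c in comments) / len(comments)

comments = [
    ReviewComment("Transaction never rolled back on failure", actionable=True),
    ReviewComment("Consider renaming variable `x`", actionable=False),
    ReviewComment("Nit: missing trailing newline", actionable=False),
]
print(f"SNR: {signal_to_noise(comments):.0%}")  # SNR: 33%
```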

👉 Visit Product Site | 📊 Read the Signal-to-Noise Analysis


2. The Foundation: llama-github

The Retrieval Infrastructure Layer

To build a graph, you first need high-fidelity data. I open-sourced the retrieval engine that powers my experiments.

Available on PyPI.

  • Role: A production-grade library designed to fetch and structure GitHub data specifically for RAG pipelines.
  • Capability: Bridges the gap between raw Git objects and AI-ready context.
```python
from llama_github import GithubRAG

# Token is a placeholder; see the project README for all constructor options.
github_rag = GithubRAG(github_access_token="your_github_access_token")

# Efficiently retrieve cross-module context without cloning the entire repo
context = github_rag.retrieve_context("How does the payment service impact the user schema?")
```

👉 View on GitHub


3. The Endgame: Code Mesh

The Deterministic Context Layer

LlamaPReview was the experiment. Code Mesh is the infrastructure.

Current LLMs treat code as Text (a sequence of tokens). Compilers treat code as a Mesh (a graph of dependencies). Code Mesh bridges this gap, giving LLMs the "X-Ray Vision" of a compiler.

  • The Architecture: A static analysis engine that builds a persistent dependency graph of the entire repository.
  • The Capability:
    • Surgical Precision: Retrieve exactly the functions needed to understand a change, with zero noise.
    • Impact Analysis: Instantly identify how a change in Module A breaks Module Z without reading the files in between. (A toy sketch follows this list.)
    • Cost Efficiency: Reduces token usage by 90% compared to full-file context stuffing.
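
To make the graph idea concrete, here is a self-contained toy sketch of reachability-based impact analysis over a dependency graph. The module names and graph are invented for illustration; Code Mesh's actual engine derives the graph from static analysis of real source code:

```python
from collections import deque

# Toy reverse-dependency graph: an edge maps a module to the modules
# that depend on it. All names here are hypothetical.
dependents = {
    "module_a": ["module_b", "module_c"],
    "module_b": ["module_z"],
    "module_c": [],
    "module_z": [],
}

def impact_of(changed: str) -> set[str]:
    """Return every module transitively affected by a change (BFS)."""
    affected: set[str] = set()
    queue = deque([changed])
    while queue:
        for dep in dependents.get(queue.popleft(), []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# A change in module_a reaches module_z via module_b; files that sit on
# no dependency path never need to be read at all.
print(impact_of("module_a"))  # {'module_b', 'module_c', 'module_z'}
```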

📚 Strategic Insights

I document my research into the next generation of AI coding architecture.

  • Case Study: Catching the "Invisible" Bug. Real-world evidence of how we found a critical logic error in a 20k-star repo that standard "diff-based" AI missed entirely.
  • The Signal-to-Noise Ratio in AI Code Review. A new evaluation framework: why simply increasing context window size often leads to lower-quality reviews.
  • (Coming Soon) The Inconsistency Problem. Why the same AI tool works perfectly on Monday but fails on Tuesday: a deep dive into "Context Instability."
  • (Coming Soon) The End of Guesswork: Code Mesh. Moving beyond probabilistic search to deterministic, graph-based dependency analysis for 100% consistent context.

💻 Tech Stack

| Area              | Tools                           |
| ----------------- | ------------------------------- |
| Core Intelligence | Python, LangChain, Hugging Face |
| Graph & Data      | ArangoDB, Neo4j, AWS            |


📫 Let's Connect

I am building the infrastructure that will power the next decade of AI development tools.


Building the GPS for the world's code.
