Civic-AurAI/rocketride-server

 
 

CI · License: MIT · Node.js 18+ · Discord

RocketRide is a high-performance data processing engine built on a C++ core with a Python-extensible node system. With 50+ pipeline nodes, native AI/ML support, and SDKs for TypeScript, Python, and MCP, it lets you process, transform, and analyze data at scale — entirely on your own infrastructure.

Key Capabilities

  • Stay in your IDE — Build, debug, test, and scale heavy AI and data workloads with an intuitive visual builder, all without leaving the environment you're used to.
  • High-performance C++ engine — Native multithreading. No bottleneck. Purpose-built for throughput, not prototypes.
  • Multi-agent workflows — Orchestrate and scale agents with built-in support for CrewAI and LangChain.
  • 50+ pipeline nodes — Python-extensible, with 13 LLM providers, 8 vector databases, OCR, NER, PII anonymization, and more.
  • TypeScript, Python & MCP SDKs — Integrate pipelines into native applications or expose them as tools for AI assistants.
  • One-click deploy — Run on Docker, on-prem, or RocketRide Cloud (👀coming soon). Our architecture is made for production, not demos.

⚡ Quick Start

  1. Install the extension for your IDE. Search for RocketRide in the extension marketplace:

    Install RocketRide extension

    Not seeing your IDE? Open an issue · Download directly

  2. Click the RocketRide (🚀) extension in your IDE

  3. Deploy a server — you'll be prompted on how you want to run the server. Choose the option that fits your setup:

    • Local (Recommended) — This pulls the server directly into your IDE without any additional setup.
    • On-Premises — Run the server on your own hardware for full control and data residency. Pull the image and deploy to Docker or clone this repo and build from source.
    • RocketRide Cloud (👀coming soon) — Managed hosting with our proprietary model server. No infrastructure to maintain.
  4. Create a .pipe file and start building
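To give a feel for step 4, here is a sketch of generating a minimal .pipe file in Python. The README only establishes that a .pipe file is a JSON object built from nodes; the field names below (`nodes`, `edges`, `type`, and so on) are illustrative assumptions, not the documented schema:

```python
import json

# Hypothetical .pipe layout: a webhook source node feeding an LLM node.
# The actual schema, node names, and config keys may differ.
pipeline = {
    "name": "hello-pipeline",
    "nodes": [
        {"id": "in", "type": "webhook"},   # source node (webhook/chat/dropper)
        {"id": "llm", "type": "llm", "config": {"provider": "openai"}},
    ],
    "edges": [
        {"from": "in", "to": "llm"},       # wire output lane to input lane
    ],
}

with open("hello.pipe", "w") as f:
    json.dump(pipeline, f, indent=2)
```

In practice you would create the file from the visual builder canvas rather than by hand; the point is only that the on-disk artifact is plain JSON and can be generated or versioned like any other config.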

🔧 Building your first pipe

  1. Pipelines are recognized by the *.pipe extension. Each pipeline and its configuration is a single JSON object, which the extension in your IDE renders in our visual builder canvas.

  2. All pipelines begin with a source node: webhook, chat, or dropper. For specific usage, examples, and inspiration 💡 on how to build pipelines, check out our guides and documentation.

  3. Connect input lanes and output lanes by type to properly wire your pipeline. Some nodes, such as agents or LLMs, can be invoked as tools by a parent node, as shown below:

Pipeline canvas example

  4. You can run a pipeline from the canvas by pressing the ▶️ button on the source node, or from the Connection Manager directly.

  5. View all available and running pipelines below the Connection Manager. Selecting a running pipeline opens in-depth analytics: trace call trees, token usage, memory consumption, and more to optimize your pipelines before scaling and deploying.

  6. 📦 Deploy your pipelines to RocketRide.ai cloud or run them on your own infrastructure.

    • Docker — Download the RocketRide server image and create a container. Requires Docker to be installed.

      docker pull ghcr.io/rocketride-org/rocketride-engine:latest
      docker create --name rocketride-engine -p 5565:5565 ghcr.io/rocketride-org/rocketride-engine:latest
    • RocketRide Cloud (👀coming soon) — Managed hosting with our proprietary model server and batched processing. The cheapest option to run AI workflows and pipelines at scale (seriously).

  7. Run your pipelines as standalone processes, or integrate them into your existing Python and TypeScript/JS applications using our SDKs.

  8. Use it, commit it, ship it. 🚚
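As a sketch of the integration step above: an application could trigger a deployed pipeline whose source node is a webhook with a plain HTTP POST, using only the Python standard library. The port comes from the docker command above; the URL path here is an assumption for illustration, not a documented endpoint:

```python
import json
import urllib.request

def trigger_webhook(payload: dict, host: str = "localhost", port: int = 5565,
                    path: str = "/pipelines/hello-pipeline/webhook") -> urllib.request.Request:
    """Build a POST request for a webhook source node (path is hypothetical)."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:{port}{path}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = trigger_webhook({"text": "hello"})
# Against a running engine you would then send it:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```

The official TypeScript and Python SDKs would wrap this kind of call (plus auth and result streaming); the sketch only shows that a webhook-sourced pipeline is reachable like any other HTTP endpoint.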

Useful Links


Made with ❤️ in 🌁 SF & 🇪🇺 EU

