Guiding the Random: On Variations of the RRT Algorithm

Some environments are too complex for exhaustive, deterministic search.

They twist, clutter, and shift. The spaces are high-dimensional, the paths nonlinear.

There is no map—only a question: Is there a way through?


This is where Rapidly-Exploring Random Trees (RRT) step in.

They don’t build plans with precision. They search with purpose.

By expanding trees toward random samples, they grow into the unknown, feeling their way across possibility.
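That expand-toward-a-sample loop is the whole core of the algorithm. A minimal 2-D sketch, assuming a square world and a caller-supplied `is_free` collision check (both names are illustrative, not from any particular library):

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5,
        max_iters=5000, bounds=(0.0, 10.0)):
    """Minimal 2-D RRT: grow a tree toward random samples until the goal
    is within reach, then walk parent pointers back to recover the path."""
    parent = {start: None}
    nodes = [start]
    for _ in range(max_iters):
        # Draw a random sample (with a small bias toward the goal).
        sample = goal if random.random() < 0.05 else \
            (random.uniform(*bounds), random.uniform(*bounds))
        # Extend the nearest tree node one step toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        parent[new] = near
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None  # no path found within the iteration budget
```

Every variation below keeps this skeleton and changes one thing: how samples are drawn, how nodes connect, or what a "step" means.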


But the original RRT was only the beginning.


Over time, researchers shaped its randomness into refined branches, crafting variations of RRT that balance exploration with efficiency, and randomness with control.


Let’s walk through the most compelling ones:


1. RRT* — From Feasibility to Optimality

The original RRT finds a path, but not always a good one. RRT* improves on this by rewiring the tree as it grows—searching not just for connection, but for better connections.

Given enough samples, the path converges toward the optimal one: RRT* is asymptotically optimal.

It’s still random, but now self-improving.
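The rewiring step fits in a few lines. In this sketch, `neighbors` stands in for the nodes inside the RRT* search radius, and the cost propagation to descendants that a full implementation performs is omitted (all names are illustrative):

```python
import math

def extend_and_rewire(nodes, parent, cost, new, neighbors, dist=math.dist):
    """One RRT* extension: attach `new` via the cheapest nearby parent,
    then re-parent neighbors through `new` when that lowers their cost.
    (A full implementation also propagates cost changes to descendants.)"""
    # Choose the parent that minimizes cost-to-come for the new node.
    best = min(neighbors, key=lambda n: cost[n] + dist(n, new))
    parent[new] = best
    cost[new] = cost[best] + dist(best, new)
    nodes.append(new)
    # Rewire: any neighbor that is cheaper to reach via `new` switches parent.
    for n in neighbors:
        via_new = cost[new] + dist(new, n)
        if via_new < cost[n]:
            parent[n] = new
            cost[n] = via_new
```

It is this second loop, the rewiring, that separates RRT* from plain RRT: connections made early can be undone later when a cheaper route appears.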


2. RRT-Connect — When Speed Matters

Designed for fast planning in cluttered spaces, RRT-Connect grows two trees: one from the start, one from the goal, each greedily extending toward the other. When they meet, the solution is found.

It’s fast, aggressive, and great for real-time applications where delay is not an option.
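A stripped-down sketch of the two-tree idea, in an obstacle-free 2-D box. A real RRT-Connect adds collision checks and a multi-step "connect" extension; here each tree takes a single step per iteration:

```python
import math
import random

def rrt_connect(start, goal, step=0.5, max_iters=5000,
                bounds=(0.0, 10.0), tol=0.5):
    """Minimal bidirectional RRT-Connect sketch: two trees (parent maps)
    grow toward each other and swap roles each iteration."""
    trees = [{start: None}, {goal: None}]
    for _ in range(max_iters):
        a, b = trees
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        # Extend tree A one step toward the sample...
        new = _extend(a, sample, step)
        # ...then extend tree B toward A's newest node.
        other = _extend(b, new, step)
        if math.dist(new, other) <= tol:
            path = _path(a, new) + _path(b, other)[::-1]
            if path[0] != start:
                path.reverse()
            return path
        trees.reverse()  # alternate which tree leads
    return None

def _extend(tree, target, step):
    """Step the nearest node in `tree` toward `target`, recording its parent."""
    near = min(tree, key=lambda n: math.dist(n, target))
    d = math.dist(near, target)
    new = target if d <= step else (near[0] + step * (target[0] - near[0]) / d,
                                    near[1] + step * (target[1] - near[1]) / d)
    if new not in tree:
        tree[new] = near
    return new

def _path(tree, node):
    """Return the root-to-node path by following parent pointers."""
    path = []
    while node is not None:
        path.append(node)
        node = tree[node]
    return path[::-1]
```

Because each tree is pulled toward the other's frontier, the two halves tend to meet much sooner than a single tree would reach the goal on its own.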


3. Informed RRT* — Targeted Exploration

Why sample the whole space when only part of it leads to improvement?

Informed RRT* focuses growth within an ellipsoidal region whose size is set by the cost of the current best path: only points inside it can possibly shorten the solution.

It turns randomness into intent, making convergence faster and more efficient.
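In 2-D, the informed set is an ellipse with the start and goal at its foci and the current best cost as its major-axis length. A sampling sketch, assuming Euclidean path cost:

```python
import math
import random

def informed_sample(start, goal, c_best):
    """Sample uniformly inside the 2-D ellipse whose foci are `start` and
    `goal` and whose major axis equals the current best path cost `c_best`.
    Any point outside this set cannot improve the solution."""
    c_min = math.dist(start, goal)
    a = c_best / 2.0                               # semi-major axis
    b = math.sqrt(c_best**2 - c_min**2) / 2.0      # semi-minor axis
    theta = math.atan2(goal[1] - start[1], goal[0] - start[0])
    cx, cy = (start[0] + goal[0]) / 2.0, (start[1] + goal[1]) / 2.0
    # Uniform point in the unit disk (rejection), stretched onto the ellipse.
    while True:
        x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            break
    ex, ey = a * x, b * y
    # Rotate into the start-goal frame and translate to the midpoint.
    return (cx + ex * math.cos(theta) - ey * math.sin(theta),
            cy + ex * math.sin(theta) + ey * math.cos(theta))
```

As better paths are found, `c_best` shrinks and the ellipse tightens around the straight line between start and goal, concentrating every new sample where it can still help.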


4. Anytime RRT — First a Fast Path, Then a Better One

Sometimes, you need a path now—and a better one later.

Anytime RRT quickly produces a feasible path, then continues refining it as more computation becomes available.

It’s ideal for systems where time is limited, but improvement is always welcome.


5. Kinodynamic RRT — Respecting Physics

Real systems can’t turn on a dime. Kinodynamic RRT accounts for dynamic constraints—acceleration, momentum, and actuator limits.

It grows not just through space, but through state and time, producing paths that a real system can actually follow.
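The key change is in the extension step: instead of stepping geometrically toward a sample, the planner samples admissible controls and forward-simulates the dynamics. A sketch for a 1-D double integrator, where the state is (position, velocity):

```python
import math
import random

def kinodynamic_extend(state, goal, dt=0.1, a_max=1.0, n_controls=10):
    """One kinodynamic-RRT extension for a 1-D double integrator:
    sample bounded accelerations, forward-simulate the dynamics, and keep
    the resulting state closest to the goal state."""
    def simulate(s, u):
        pos, vel = s
        # Constant acceleration u applied over one time step dt.
        return (pos + vel * dt + 0.5 * u * dt * dt, vel + u * dt)
    controls = [random.uniform(-a_max, a_max) for _ in range(n_controls)]
    candidates = [simulate(state, u) for u in controls]
    return min(candidates, key=lambda s: math.dist(s, goal))
```

Because every new node is the result of simulating feasible controls, the tree can only contain motions the system can actually execute.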


6. RRT with Learning — Guided by Experience

Recent approaches integrate machine learning to guide tree growth. Instead of sampling blindly, the algorithm learns where good paths tend to lie—and grows trees with intuition.
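One common shape this takes is a biased sampler. The sketch below uses waypoints from previously successful paths as a stand-in for a trained sampling model; the uniform fallback is what preserves the planner's completeness guarantees (all parameters are illustrative):

```python
import random

def guided_sample(experience, bounds=(0.0, 10.0), p_learned=0.7, sigma=0.5):
    """Learning-guided sampling sketch: with probability `p_learned`, sample
    near a waypoint drawn from previously successful paths (`experience`);
    otherwise fall back to uniform sampling over the workspace."""
    if experience and random.random() < p_learned:
        cx, cy = random.choice(experience)
        return (random.gauss(cx, sigma), random.gauss(cy, sigma))
    return (random.uniform(*bounds), random.uniform(*bounds))
```

Swap `experience` for samples from a generative model conditioned on the map, and the same structure describes much of the recent learned-sampler literature.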


Together, these variations turn RRT from a clever trick into a family of planning strategies—each tuned to different needs, environments, and systems.


What unites them is this:

They explore not with certainty, but with courage.

They don’t demand full knowledge.

They trust in structured randomness, in incremental progress, in the belief that even in complex spaces, a solution can be grown—one branch at a time.


Because when the world is unknown,

the best path forward may not be the one you calculate—

but the one you grow into.