Rerun.io for Robotics Telemetry Visualization: A Hands-On Tutorial
Over the past five years, robotics engineers have defaulted to RViz for live visualization and Foxglove Studio for cloud dashboards. Each excels in its niche. But RViz locks you into ROS and live-only workflows; replaying a failed mission offline is cumbersome. Foxglove handles multi-robot cloud ops beautifully but carries the weight of a full enterprise platform. Rerun.io carved out a third path: a lightweight, open-source, SDK-first visualizer optimized for offline replay, time-aware logging, and multi-modal sensor fusion. By 2026, it has become the default choice for robotics teams doing development, testing, and ML training-set curation. In this tutorial, we'll build a complete telemetry pipeline from a robot's on-board sensors through recording, replay, and analysis, showing exactly how Rerun.io fits into your robotics stack.
What this post covers:
- Why Rerun.io won mindshare and how it differs from RViz and Foxglove
- Core architecture: the SDK, recording stream, and GPU-accelerated viewer
- Step-by-step Python logging of Lidar, camera, transforms, and scalars
- Connecting a live ROS 2 robot via the Rerun bridge
- Replaying missions and exporting datasets
- Practical recommendations for tool selection
- FAQ and further resources
What Rerun Is and Why It Won Mindshare
Rerun.io is an MIT and Apache 2.0 licensed open-source project (GitHub: rerun-io/rerun) built in Rust with native SDKs in Python, Rust, and C++. At its core, it is a columnar in-memory database and visualization engine purpose-built for sensor and robotics data.
Unlike RViz, which is tightly coupled to ROS middleware, Rerun decouples logging from consumption. You push data through an SDK, the data flows into a timeline-aware recording stream, and a separate GPU-accelerated viewer renders it. This separation is powerful: you can log offline, replay anytime, merge data from multiple sources, and query by time without re-running the robot.
Unlike Foxglove, which is a cloud-first, multi-role platform (operator dashboards, ingestion, alerts, integrations), Rerun stays laser-focused on one problem: making it fast and intuitive to explore and understand multimodal sensor data over time. No paid tiers, no cloud dependency, no auth systems.
Rerun won mindshare because:
- Time-aware by design. Every log entry carries a timestamp on one or more timelines (e.g., ROS time, wall-clock, frame number). The viewer scrubs through time; entities update as time changes. This is native to the architecture, not bolted on (see the sketch after this list).
- Multi-language SDKs. Log from Python during development, Rust during deployment, C++ in a legacy system—all to the same .rrd recording file and the same viewer.
- Offline-first. Record locally, replay anywhere. No cloud account, no API keys, no latency surprises.
- Columnar storage. Data is stored as Arrow; queries are fast even over millions of timesteps.
- Minimal overhead. The SDK is lightweight; logging a Lidar frame with 100k points takes microseconds.
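To make the timeline model concrete, here is a minimal sketch (using the same pre-1.0 Python API as the tutorial below) that stamps one value stream on two timelines at once; scrubbing either timeline in the viewer replays it:

import rerun as rr

rr.init("timelines_demo", spawn=True)

for frame in range(300):
    # One log call, two timelines: an integer frame counter and a simulated ROS clock
    rr.set_time_sequence("frame", frame)
    rr.set_time_seconds("ros_time", 1000.0 + frame / 30.0)
    rr.log("/state/battery_voltage", rr.Scalar(24.0 - 0.001 * frame))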
By 2026, Rerun is still pre-1.0 (versions in the 0.2x range), so API churn is possible. But the core API has stabilized, and adoption in research robotics and autonomous-vehicle teams is solid.
Architecture: SDK → Recording Stream → Viewer
Rerun’s architecture is elegantly simple:
The Recording: A Rerun recording is a timestamped collection of entities and components. An entity is a hierarchical path like /world/robot/lidar or /state/battery_voltage. A component is a typed blob of data: a 3D point cloud (Points3D), an RGB image (Image), a 3D transform (Transform3D), a scalar time-series value (Scalar), or a text annotation.
The SDK: When you call rr.log("/world/robot/lidar", rr.Points3D(points, colors=colors)), the SDK serializes that data to Arrow format and pushes it into the recording stream. The stream is abstracted: it can be in-process memory, a local file (.rrd), or a network connection (TCP or gRPC) to a remote viewer.
The Viewer: A separate Rust process with a GPU-accelerated frontend (wgpu). It subscribes to the recording stream, holds the columnar data in VRAM, and renders 3D scenes, 2D images, time-series plots, and entity hierarchies in real time. The viewer is stateless relative to the SDK: it doesn’t dictate what you log, only how to display it.
Transport modes:
– Local: rr.spawn() launches a viewer subprocess on the same machine and streams data to it over a local socket.
– TCP: rr.connect("127.0.0.1:9876") connects the SDK to a viewer listening on a port.
– File: rr.save("mission.rrd") records to disk without a viewer; load later with rerun mission.rrd.
– Web: Embed a JavaScript viewer in a web app for cloud sharing.

Tutorial: Logging from a Robot in Python
Let’s build a real telemetry pipeline. Assume we have a mobile robot with a Lidar, front camera, IMU, and battery monitor.
Installation and Initialization
pip install rerun-sdk
import rerun as rr
import numpy as np
from datetime import datetime
# Initialize a recording named "my_robot_mission"
rr.init("my_robot_mission")
# Option 1: Show viewer in a subprocess
rr.spawn()
# Option 2: Or connect to a remote viewer
# rr.connect("127.0.0.1:9876")
# Option 3: Or save to disk
# rr.save("mission.rrd")
Logging Sensor Data
Lidar point cloud:
def log_lidar(timestamp_ros: float, points: np.ndarray, intensities: np.ndarray):
    """
    Args:
        points: (N, 3) array of (x, y, z) in meters
        intensities: (N,) array of values in [0, 1]
    """
    rr.set_time_seconds("ros_time", timestamp_ros)
    # Map intensities to grayscale RGB for visualization
    rgb = np.stack([intensities, intensities, intensities], axis=1) * 255
    rgb = rgb.astype(np.uint8)
    rr.log(
        "/world/robot/lidar",
        rr.Points3D(points, colors=rgb),
    )
Camera image:
def log_camera(timestamp_ros: float, image_rgb: np.ndarray):
    """
    Args:
        image_rgb: (H, W, 3) uint8 array
    """
    rr.set_time_seconds("ros_time", timestamp_ros)
    rr.log("/world/robot/camera", rr.Image(image_rgb))
Transform (robot pose):
def log_robot_pose(timestamp_ros: float, position: np.ndarray, quaternion: np.ndarray):
    """
    Args:
        position: (3,) array [x, y, z]
        quaternion: (4,) array [x, y, z, w]
    """
    rr.set_time_seconds("ros_time", timestamp_ros)
    rr.log(
        "/world/robot",
        rr.Transform3D(translation=position, quaternion=quaternion),
    )
Scalar time-series (battery voltage):
def log_battery(timestamp_ros: float, voltage: float):
    rr.set_time_seconds("ros_time", timestamp_ros)
    rr.log("/state/battery_voltage", rr.Scalar(voltage))
Integration Loop
for msg in robot_sensor_stream:
    timestamp = msg.header.stamp.sec + msg.header.stamp.nsec / 1e9
    if msg.type == "lidar":
        log_lidar(timestamp, msg.points, msg.intensities)
    elif msg.type == "camera":
        log_camera(timestamp, msg.rgb)
    elif msg.type == "pose":
        log_robot_pose(timestamp, msg.position, msg.quaternion)
    elif msg.type == "battery":
        log_battery(timestamp, msg.voltage)
As data is logged, the viewer updates in real time. You can pause, scrub the timeline, toggle entities on and off, and zoom into 3D scenes. The columnar storage means even a 2-hour mission with millions of sensor readings loads in seconds.

ROS 2 Bridge: Stream Live Telemetry
If you’re running on ROS 2, the Rerun ecosystem provides bridge packages (e.g., rerun-loader-urdf, rerun_robotics) that automate topic mapping.
Install the bridge:
pip install rerun-sdk rerun-loader-urdf
sudo apt install ros-<distro>-rerun-ros2-bridge # Pre-built binary
Configure topic mapping (YAML):
# rerun_config.yaml
tf_frame_prefix: "/world"
topic_mappings:
  /tf:
    entity_path_template: "/world/{frame_id}"
    component: Transform3D
  /scan:
    entity_path_template: "/world/sensors/lidar"
    component: Points3D
    conversion: "laser_scan_to_points3d"
  /camera/image_raw:
    entity_path_template: "/world/sensors/camera"
    component: Image
  /joint_states:
    entity_path_template: "/robot/joint/{joint_name}"
    component: Scalar
    conversion: "joint_angle_to_scalar"
Launch the bridge:
ros2 launch rerun_ros2_bridge bridge.launch.py config_file:=rerun_config.yaml
The bridge subscribes to all mapped topics, converts ROS messages to Rerun components, and logs them with ROS timestamps. The viewer sees a live 3D world, sensor feeds, and joint angles updated in real time. Optionally, save the recording to disk with --output mission.rrd for offline replay.

Replaying and Querying Recordings (.rrd Files)
Once a mission is complete and saved to .rrd, engineers and analysts can explore it without the robot.
Open a saved recording:
rerun mission.rrd
The viewer loads the .rrd file (which is memory-mapped for efficiency) and displays the same interactive environment. The time scrubber at the bottom lets you navigate the entire mission timeline. Hovering over an entity in the tree shows its history; clicking an entity shows its properties at the current time.
Query and export:
Within the viewer, you can:
– Filter entities: Show/hide by path; e.g., hide camera to focus on Lidar.
– Change views: Toggle between 3D, 2D (image focus), and plot panels (for time-series data).
– Mark timestamps: Pinpoint interesting events (collision, anomaly, state change).
– Export segments: Select a time range and export as images, video, or a new .rrd subset.
This is invaluable for post-mission analysis, debugging, and ML dataset curation. If a perception model failed on a specific scene, you can:
1. Find the timestamp in the full 2-hour recording.
2. Export the Lidar + camera frames from 5 seconds before and after.
3. Use those frames to retrain or debug the model.
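If you prefer to script this rather than clicking through the viewer, recent SDK versions (0.19+) also expose a dataframe query API over .rrd files. A minimal sketch, assuming that API; the failure timestamp is illustrative and method names may vary slightly between releases:

import rerun as rr

# Memory-map the saved recording; no robot or viewer required
recording = rr.dataframe.load_recording("mission.rrd")

# View the Lidar and camera entities, indexed on the "ros_time" timeline
view = recording.view(
    index="ros_time",
    contents=["/world/robot/lidar", "/world/robot/camera"],
)

# Keep only the 10-second window around the failure
failure_t = 4211.7  # illustrative timestamp
view = view.filter_range_seconds(failure_t - 5.0, failure_t + 5.0)

# Materialize as Arrow record batches for export or dataset tooling
for batch in view.select():
    ...  # e.g., write frames out for labeling or retraining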

Trade-offs: Rerun vs RViz vs Foxglove
Each tool has a place. Here’s when to use each:
RViz (ROS built-in visualization)
– Strengths: Native ROS 1 / ROS 2 integration; familiar to ROS developers; low-latency live rendering.
– Weaknesses: No replay; no time scrubber; tightly coupled to ROS middleware; hard to share recorded sessions.
– Use when: You’re doing live debugging on a single robot during a development sprint.
Foxglove Studio (Cloud-native, paid tiers)
– Strengths: Multi-robot dashboards; cloud ingestion and storage; operator panels (buttons, sliders, text input); enterprise support.
– Weaknesses: Paid beyond free tier; cloud-first (latency, privacy); heavy memory footprint; not offline-first.
– Use when: Running a fleet of deployed robots; need real-time operator control; compliance or audit trails required.
Rerun.io (Offline-first, lightweight)
– Strengths: Time-aware replay; multi-language SDKs; offline-first; columnar storage (fast queries); minimal overhead; open-source.
– Weaknesses: Pre-1.0 (API churn possible); no interactive operator widgets (buttons, toggles); no built-in cloud ingestion; best for post-mission or batch analysis.
– Use when: Developing perception algorithms; curating training data; analyzing failed missions; testing on diverse robot platforms.
Tool-fit decision tree: debugging a single robot live during development → RViz; operating a deployed fleet with operator controls and cloud ingestion → Foxglove; developing perception, replaying missions, or curating datasets → Rerun.

Practical Recommendations
- Hybrid approach: Use Rerun for development, testing, and dataset curation. Use RViz for live debugging during active development on a single robot. Use Foxglove for fleet operations and live dashboards.
- Multi-language logging: If your robotics stack is polyglot (Python scripts, Rust perception, C++ legacy), Rerun's cross-language SDKs let you log to a unified timeline without middleware translation.
- Recording discipline: Make recording a first-class citizen. Every test run, every mission, every anomaly should produce a .rrd file. Over time, you build a searchable library of robot behavior, which is invaluable for ML labeling and regression testing.
- Latency-sensitive systems: On robots with strict real-time requirements, Rerun's logging overhead is minimal (microseconds per log call). The viewer runs on a separate machine, so visualization doesn't slow the robot.
- Version pinning: Since Rerun is pre-1.0, pin the SDK version in your requirements.txt to avoid surprise API breakage in CI/CD (see the example below).
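For example, a pinned entry might look like this (the exact version is illustrative; pin whichever release you validated):

# requirements.txt
rerun-sdk==0.22.1  # pre-1.0: minor releases can change the API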
FAQ
Q: Can I use Rerun without ROS?
A: Yes. Rerun is completely middleware-agnostic. Log from a Python script, a Rust binary, or a C++ app—with or without ROS. You provide the timestamps and entity paths; Rerun handles the rest.
Q: What’s the file size of a .rrd recording?
A: Roughly 10–100 MB per hour of multi-sensor data (Lidar, camera, IMU, scalars), depending on resolution and sampling rate. Compression is lossless via Arrow. Full-res camera streams add significant size; consider downsampling for replay-heavy workflows.
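One lightweight way to downsample, reusing the log_camera pattern from the tutorial above (the decimation factors are illustrative):

import numpy as np
import rerun as rr

def log_camera_downsampled(timestamp_ros: float, image_rgb: np.ndarray, frame_idx: int):
    # Keep every 3rd frame and halve the resolution before logging
    if frame_idx % 3 != 0:
        return
    rr.set_time_seconds("ros_time", timestamp_ros)
    rr.log("/world/robot/camera", rr.Image(image_rgb[::2, ::2]))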
Q: Can I stream Rerun data to the cloud?
A: Yes. Use the TCP or gRPC transport to connect a remote viewer, or implement a custom sink to stream to cloud storage. However, Rerun’s design is offline-first; cloud ingestion and querying are not built in.
Q: Does Rerun work with ROS 1?
A: The official bridge targets ROS 2, but you can write a custom ROS 1 bridge in ~50 lines of Python using the Rerun SDK and rospy subscribers.
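A minimal sketch of that idea for a single LaserScan topic (topic name and entity path are illustrative; assumes rospy, sensor_msgs, and the Rerun Python SDK):

import numpy as np
import rerun as rr
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg: LaserScan):
    rr.set_time_seconds("ros_time", msg.header.stamp.to_sec())
    # Convert polar ranges to Cartesian points in the sensor frame
    angles = msg.angle_min + np.arange(len(msg.ranges)) * msg.angle_increment
    ranges = np.asarray(msg.ranges)
    valid = np.isfinite(ranges) & (ranges >= msg.range_min) & (ranges <= msg.range_max)
    points = np.stack(
        [ranges[valid] * np.cos(angles[valid]),
         ranges[valid] * np.sin(angles[valid]),
         np.zeros(valid.sum())],
        axis=1,
    )
    rr.log("/world/sensors/lidar", rr.Points3D(points))

if __name__ == "__main__":
    rospy.init_node("rerun_bridge")
    rr.init("ros1_bridge", spawn=True)
    rospy.Subscriber("/scan", LaserScan, on_scan, queue_size=1)
    rospy.spin()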
Q: Is Rerun suitable for real-time operator dashboards?
A: Not directly. Rerun has no panel widgets (buttons, sliders, text input), so you can’t send commands back to the robot. For operator dashboards, use Foxglove or a custom web app. Rerun is better for post-mission analysis and development workflows.
Further Reading
- ROS 2 Nav2 Autonomous Mobile Robot Warehouse Navigation
- ROS 2 Jazzy Jalisco Migration Guide from Humble (2026)
- DDS Data Distribution Service Protocol Complete Guide
- Rerun.io Documentation
- Rerun GitHub Repository
About the author: Riju is a robotics engineer and technologist focused on digital twins, observability, and data-driven robotics. She writes on IoT, robotics, and autonomous systems for iotdigitaltwinplm.com.
