Realtime Vehicle Telemetry Dashboard Using ClickHouse and JS Components
If your team is losing hours building UI for high‑cardinality telemetry and your backend can’t keep up with bursty vehicle streams, this guide gives you a production‑ready pattern: ClickHouse as the OLAP sink, plus virtualized JS tables and streaming charts that render millions of rows per hour.
The short story (what you’ll end up with)
By following this step‑by‑step tutorial you will:
- Ingest high‑throughput telemetry (WebSocket → Node ingest → ClickHouse) with safe, batched inserts.
- Model ClickHouse for low‑latency analytics (MergeTree partitioning, order keys, codecs, TTLs).
- Expose a small realtime API that streams latest aggregates via WebSocket for UI clients.
- Implement a fast frontend: virtualized JS tables and streaming charts (uPlot / lightweight-charts) with examples in React, Vue, vanilla JS and a Web Component.
- Apply benchmarking tips and production hardening for 2026 workloads.
Why ClickHouse in 2026 for telemetry?
ClickHouse continued to expand its role as a low‑latency OLAP engine for high cardinality time series through late 2025 and into 2026. With major growth in both self‑managed and cloud offerings, ClickHouse excels where you need fast aggregations across massive event volumes while maintaining cost efficiency compared to some cloud OLAP vendors.
Important trends in 2026:
- Wider adoption of ClickHouse Cloud for low ops teams.
- Improvements in streaming ingestion paths (Kafka, HTTP, native TCP) and materialized views for pre‑aggregation.
- Shift to push most UI work into lightweight, GPU‑friendly drawing (WebGL, canvas) and virtualization to keep UIs responsive even with continual updates.
Architecture overview
High level components:
- Vehicle devices (or simulators) → WebSocket/HTTP POST to an ingest gateway.
- Node.js ingest service buffers and batches to ClickHouse via HTTP INSERT (JSONEachRow / CSV).
- ClickHouse stores raw telemetry in a MergeTree table and maintains materialized views for common aggregates (per vehicle, per region, per minute).
- API layer exposes two paths: fast OLAP queries (on demand) and a lightweight WebSocket server that publishes streaming aggregates to UIs.
- Client: virtualized data table + streaming chart components subscribe and render.
Diagram (text)
Device → Ingest WS → Ingest Service → ClickHouse ⟷ Realtime API WS → Browser UI (virtualized table + streaming chart)
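Each hop in that pipeline carries one small JSON document per reading. A plausible payload shape, sketched below, uses field names matching the telemetry_raw columns defined in the next step (the example values are illustrative only):

```javascript
// One telemetry reading as it travels Device → Ingest → ClickHouse.
// Field names mirror the telemetry_raw columns; values are made up.
const reading = {
  event_time: '2026-01-15 08:30:00.250', // DateTime64(3), millisecond precision
  vehicle_id: 'vehicle_42',
  lat: 52.52,
  lon: 13.405,
  speed: 63.5,      // km/h
  heading: 270.0,   // degrees
  fuel: 41.2,       // percent remaining
  metadata: JSON.stringify({ firmware: '1.8.3' }) // opaque String column
};

// Serialized once per message over the ingest WebSocket:
const wirePayload = JSON.stringify(reading);
```

Keeping metadata as a pre-serialized string lets the schema stay stable while devices attach arbitrary extras.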
Step 1 — ClickHouse schema and ingestion patterns
Design goals: fast inserts, small partitions for targeted queries, low storage overhead, predictable compaction. Use MergeTree with sensible order and partition keys.
Recommended DDL (2026 best practices)
CREATE TABLE telemetry_raw (
event_time DateTime64(3),
vehicle_id String,
lat Float64,
lon Float64,
speed Float32,
heading Float32,
fuel Float32,
metadata String
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_time)
ORDER BY (vehicle_id, event_time)
SETTINGS index_granularity = 8192;
Notes: Partition by month (or by day if you have heavy volumes) to limit scans. Ordering by (vehicle_id, event_time) gives efficient single‑vehicle time‑range reads. Use DateTime64(3) for millisecond precision. If you need stronger compression, declare per‑column codecs in the DDL, e.g. speed Float32 CODEC(ZSTD(3)).
Materialized view for downsampled streaming aggregates
-- Create the target table first: a materialized view with TO requires it to exist.
CREATE TABLE telemetry_minute_agg (
minute DateTime,
vehicle_id String,
lat Float64,
lon Float64,
avg_speed Float32,
max_speed Float32,
events UInt64
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(minute)
ORDER BY (vehicle_id, minute);
CREATE MATERIALIZED VIEW telemetry_minute
TO telemetry_minute_agg AS
SELECT
toStartOfMinute(event_time) AS minute,
vehicle_id,
any(lat) AS lat,
any(lon) AS lon,
avg(speed) AS avg_speed,
max(speed) AS max_speed,
count() AS events
FROM telemetry_raw
GROUP BY minute, vehicle_id;
Materialized views keep UI queries fast by precomputing 1‑minute aggregates. For sub‑second dashboards you can use smaller buckets, but 1s/10s materialized aggregates cost noticeably more storage.
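On the serving side, the API layer reads these aggregates over the ClickHouse HTTP interface. A minimal sketch of building and running that query from Node (the helper names and parameterization are illustrative, not part of the schema above; the HTTP endpoint must only be reachable server‑side):

```javascript
// Build the snapshot SQL for a set of vehicles over the last N minutes.
// Single quotes in IDs are escaped so they cannot break the IN (...) list.
function buildSnapshotQuery(vehicleIds, minutes) {
  const idList = vehicleIds
    .map(id => `'${String(id).replace(/'/g, "\\'")}'`)
    .join(', ');
  return `SELECT minute, vehicle_id, lat, lon, avg_speed, max_speed, events
FROM telemetry_minute_agg
WHERE vehicle_id IN (${idList})
  AND minute > now() - INTERVAL ${Number(minutes)} MINUTE
ORDER BY vehicle_id, minute
FORMAT JSONEachRow`;
}

// Execute against the ClickHouse HTTP endpoint and parse JSONEachRow output
// (newline-delimited JSON, one row per line).
async function fetchSnapshot(clickhouseUrl, vehicleIds, minutes = 15) {
  const url = `${clickhouseUrl}/?query=${encodeURIComponent(buildSnapshotQuery(vehicleIds, minutes))}`;
  const res = await fetch(url);
  const text = await res.text();
  return text.trim().split('\n').filter(Boolean).map(JSON.parse);
}
```

This is the "warm snapshot" path the realtime API pushes to a client right after it subscribes.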
Step 2 — Ingest service: safe, batched writes
Do not INSERT one row at a time. Batch inserts (hundreds to thousands of rows) and use concurrent writers. Node example below uses the ClickHouse HTTP insert endpoint with JSONEachRow.
Node.js ingest example (WebSocket listener & ClickHouse batch insert)
/* ingest.js — WebSocket listener that batches rows into ClickHouse */
const WebSocket = require('ws');
// Node 18+ ships a global fetch; on older Node, require('node-fetch') instead.
const CLICKHOUSE_URL = 'http://clickhouse-host:8123';
const BATCH_SIZE = 500;
const FLUSH_INTERVAL_MS = 1000;
const buffer = [];
async function flush() {
if (!buffer.length) return;
// splice atomically drains the buffer, so overlapping flushes never double-send
const rows = buffer.splice(0, buffer.length);
const body = rows.map(r => JSON.stringify(r)).join('\n');
try {
const res = await fetch(`${CLICKHOUSE_URL}/?query=${encodeURIComponent('INSERT INTO telemetry_raw FORMAT JSONEachRow')}`, {
method: 'POST',
body,
// JSONEachRow is newline-delimited JSON, not a single JSON document
headers: { 'Content-Type': 'text/plain' }
});
if (!res.ok) throw new Error(`ClickHouse insert failed: ${res.status}`);
} catch (err) {
console.error(err);
buffer.unshift(...rows); // re-queue so a transient failure does not drop data
}
}
setInterval(flush, FLUSH_INTERVAL_MS);
const wss = new WebSocket.Server({ port: 8081 });
wss.on('connection', ws => {
ws.on('message', msg => {
try {
const data = JSON.parse(msg); // each message is one JSON telemetry reading
buffer.push(data);
if (buffer.length >= BATCH_SIZE) flush();
} catch (e) {
console.error('bad telemetry payload', e);
}
});
});
Production tips:
- Use multiple ingest workers behind a load balancer — ClickHouse handles concurrent inserts well.
- Enable insert compression and tune max_insert_block_size.
- Consider Kafka engine + materialized views if you need at‑least‑once processing and replay semantics.
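If you stay on the plain HTTP insert path rather than Kafka, put retries with exponential backoff around the flush so bursty failures do not lose batches. A minimal sketch (function names and the default attempt counts are assumptions, not part of the ingest service above):

```javascript
// Exponential backoff schedule: delay in ms before retry k (0-indexed).
function backoffDelays(attempts, baseMs = 250) {
  return Array.from({ length: attempts }, (_, k) => baseMs * 2 ** k);
}

// Retry an async insert with backoff; re-throws after the last attempt so the
// caller can park the batch in a dead-letter queue instead of dropping it.
async function insertWithRetry(insertFn, attempts = 4, baseMs = 250) {
  const delays = backoffDelays(attempts, baseMs);
  for (let k = 0; k < attempts; k++) {
    try {
      return await insertFn();
    } catch (err) {
      if (k === attempts - 1) throw err;
      await new Promise(resolve => setTimeout(resolve, delays[k]));
    }
  }
}
```

Wrapping the flush body as `insertWithRetry(() => postBatchToClickHouse(rows))` keeps the retry policy in one place and tunable.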
Step 3 — Realtime API and streaming to clients
The UI needs two things: on‑demand OLAP queries and a live stream of recent events/aggregates. Keep the WebSocket protocol minimal: subscribe to vehicle sets, receive periodic aggregate messages.
Minimal realtime API pattern
- Client opens WebSocket to API server, sends { action: 'subscribe', vehicles: ['v1','v2'] }.
- Server maps the subscription to aggregates: query telemetry_minute_agg for the last N minutes and push a warm snapshot.
- Ingest service or ClickHouse materialized views emit new aggregates; server publishes deltas to subscribers.
/* realtime-publisher.js (concept) */
// After inserting, the ingest service can publish to a Redis channel or internal queue
// API server consumes and forwards to WebSocket clients with matching subscriptions.
// Pseudo: ingest after insert -> publish('agg_channel', {vehicle_id, minute, avg_speed})
// API server subscribes and forwards to clients.
Why not query ClickHouse for every update? Frequent OLAP queries are expensive. Instead, publish only the aggregates that change. Use materialized views + small pub/sub to keep UI latency < 500ms.
Step 4 — Frontend: virtualized table + streaming chart
The frontend stack can be React, Vue, or vanilla. Key techniques: virtualization for large lists, incremental patch updates (immutable diffs), and GPU‑friendly drawing (canvas / WebGL) for charts.
React: virtualized table (react-window) + streaming chart (uPlot)
react-window gives O(rendered rows) DOM nodes. uPlot is tiny and extremely fast for line charts.
// React: Virtualized list skeleton (JSX)
import { FixedSizeList as List } from 'react-window';
function Row({ index, style, data }) {
const item = data[index];
return (
<div style={style}>
<strong>{item.vehicle_id}</strong> {item.avg_speed} km/h
</div>
);
}
function TelemetryTable({ rows }) {
return (
<List height={600} itemCount={rows.length} itemSize={35} itemData={rows}>
{Row}
</List>
);
}
// For chart: push new points into uPlot series and call setData without re-rendering React
Key pattern: keep chart rendering outside React render cycle; call the chart's API to append points.
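Appending outside React is straightforward once the series data lives in plain arrays, since uPlot wants column‑oriented data (data[0] is timestamps, data[1..] are series values). A sketch of a bounded buffer feeding setData follows; the 2,000‑point cap is an arbitrary assumption, and `chart` is an already‑constructed uPlot instance:

```javascript
// Keeps the last maxPoints samples in uPlot's column-oriented layout.
function makeSeriesBuffer(maxPoints = 2000) {
  const data = [[], []]; // [timestamps (seconds), values]
  return {
    push(tsSec, value) {
      data[0].push(tsSec);
      data[1].push(value);
      if (data[0].length > maxPoints) { // evict the oldest point
        data[0].shift();
        data[1].shift();
      }
      return data;
    },
    data,
  };
}

// Usage, bypassing React's render cycle entirely:
// const buf = makeSeriesBuffer();
// ws.onmessage = e => {
//   const { minute, avg_speed } = JSON.parse(e.data);
//   chart.setData(buf.push(Date.parse(minute) / 1000, avg_speed));
// };
```

Capping the buffer keeps redraw cost flat no matter how long the dashboard stays open.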
Vue 3: virtualization + streaming
Use vue-virtual-scroller or similar. Same pattern: keep heavy drawing out of reactive DOM updates.
Vanilla JS + Web Component example
Create a small custom element that subscribes to the realtime WS and updates a canvas chart and a virtualized view. Virtualization in vanilla means recycling a small pool of DOM rows.
// webcomponent-telemetry.js (simplified)
class TelemetryList extends HTMLElement {
constructor(){
super();
this.attachShadow({mode:'open'});
this.container = document.createElement('div');
this.container.style.height = '600px';
this.shadowRoot.appendChild(this.container);
this.buffer = [];
this.maxBuffer = 5000; // cap memory for long-running dashboards
this.renderWindow = 50;
// Recycled pool: create the row elements once, reuse them on every update
this.pool = [];
for (let i = 0; i < this.renderWindow; i++) {
const row = document.createElement('div');
row.hidden = true;
this.container.appendChild(row);
this.pool.push(row);
}
}
connectedCallback(){
this.ws = new WebSocket('wss://api.example/realtime');
this.ws.onmessage = e => {
const msg = JSON.parse(e.data);
this.buffer.unshift(msg); // newest first
if (this.buffer.length > this.maxBuffer) this.buffer.length = this.maxBuffer;
this._render();
};
}
disconnectedCallback(){
this.ws.close();
}
_render(){
// Update the fixed pool in place instead of rebuilding DOM on every message
const visible = Math.min(this.buffer.length, this.renderWindow);
for (let i = 0; i < this.renderWindow; i++) {
const row = this.pool[i];
if (i < visible) {
row.hidden = false;
row.textContent = `${this.buffer[i].vehicle_id} ${this.buffer[i].avg_speed}`;
} else {
row.hidden = true;
}
}
}
}
customElements.define('telemetry-list', TelemetryList);
Step 5 — Performance, cost and benchmark tips
Plan for thousands of vehicles generating JSON at 1–5 events/sec. Use these rules of thumb:
- Batch writes: 500–5000 rows per insert reduces CPU and improves compression.
- Partitioning: choose day/month based on retention and query patterns.
- Order key: choose columns you frequently filter by (vehicle_id then event_time).
- Compression: ZSTD level 3–5 balances CPU and disk.
- Materialized views: precompute the frequent aggregates for the UI.
- Backpressure: put a queue (Kafka/Redis) between ingest and ClickHouse if upstream bursts risk dropping events.
Simple benchmark approach: generate synthetic telemetry at target rate and measure end‑to‑end latency (device → UI update). Track ClickHouse insert latency, merge queue backlog, and API publish interval. Increase batch sizes until insert latency stabilizes without exceeding acceptable UI latency.
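A traffic generator for that benchmark only needs to synthesize plausible readings at a controlled rate. A sketch follows; the value ranges and field names are assumptions consistent with the telemetry_raw schema above:

```javascript
// Produce one synthetic reading for a vehicle at a given timestamp (ms epoch).
function makeReading(vehicleId, nowMs) {
  return {
    // ClickHouse-friendly DateTime64(3) string: 'YYYY-MM-DD hh:mm:ss.mmm'
    event_time: new Date(nowMs).toISOString().replace('T', ' ').replace('Z', ''),
    vehicle_id: vehicleId,
    lat: 52 + Math.random(),       // rough bounding box, illustrative only
    lon: 13 + Math.random(),
    speed: Math.random() * 120,    // km/h
    heading: Math.random() * 360,  // degrees
    fuel: Math.random() * 100,     // percent
    metadata: '{}'
  };
}

// Drive N simulated vehicles toward the ingest WebSocket at a fixed rate:
// setInterval(() => {
//   for (let v = 0; v < N; v++) {
//     ws.send(JSON.stringify(makeReading(`vehicle_${v}`, Date.now())));
//   }
// }, 1000); // 1 event/sec per vehicle; shrink the interval to raise the rate
```

Stamping event_time at the generator lets you measure true end‑to‑end latency when the point finally renders in the UI.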
Security, licensing and operational concerns (What 2026 teams ask)
- Secure the ClickHouse HTTP endpoint (mutual TLS, network ACLs). Do not expose it directly to clients.
- Sanitize and validate telemetry on ingest side; untrusted metadata fields should be stored separately or limited.
- Track ClickHouse server metrics and tune max_memory_usage for merges and queries; treat observability and governance as first‑class concerns, as in observability‑first designs.
- Use role‑based access, and make sure any third‑party JS components have compatible licenses for commercial use.
Advanced strategies and future‑proofing (2026+)
As OLAP engines evolve and browser CPUs shift, consider these advanced strategies:
- Edge aggregation — compress and pre‑aggregate telemetry at the edge (gateway or device) to reduce write volume.
- Adaptive sampling — increase sampling for low‑priority vehicles during bursts, full fidelity for flagged ones.
- Vectorized query caching — use ClickHouse cache layers or materialized views to serve commonly requested spans without recomputation.
- WebGL rendering and micro‑edge instances — for million‑point series, use WebGL charting libraries (or a thin custom WebGL layer) for better rendering throughput, and deploy API nodes on micro‑edge instances for lower latency close to devices.
Common pitfalls and how to avoid them
- Inserting single rows — always batch inserts.
- Using broad ORDER BY that forces full table scans — prefer vehicle_id + time patterns.
- Driving UI updates from full table fetches — prefer deltas or aggregated streams.
- Over‑reactive React renders — keep heavy drawing outside React and use virtualization.
“Move work where it belongs: heavy aggregations to ClickHouse, rendering to the client’s GPU, and routing logic to a small, observable API layer.”
Example queries you’ll use
-- latest N points for a vehicle
SELECT event_time, speed, lat, lon
FROM telemetry_raw
WHERE vehicle_id = 'vehicle_42'
ORDER BY event_time DESC
LIMIT 1000;
-- top 10 vehicles by average speed in last 15 minutes
SELECT vehicle_id, avg(speed) AS avg_speed
FROM telemetry_raw
WHERE event_time > now() - INTERVAL 15 MINUTE
GROUP BY vehicle_id
ORDER BY avg_speed DESC
LIMIT 10;
Actionable checklist (get this working quickly)
- Provision ClickHouse (self‑managed or cloud). Create the telemetry_raw and telemetry_minute_agg tables, and weigh managed/cloud cost and ops trade‑offs when sizing.
- Spin up the Node ingest service (paste ingest.js), route device traffic to it.
- Set up a lightweight pub/sub (Redis) for aggregates or let ingest publish directly to API WS.
- Wire up a simple frontend using the provided React virtualized table + uPlot example and subscribe to the realtime WS.
- Run load tests with synthetic traffic and tune batch sizes, partitions and materialized view aggregations.
Takeaways
- ClickHouse is a 2026‑ready OLAP backend for high‑throughput telemetry, when you design partitions, order keys and materialized views correctly.
- Frontend responsiveness relies on virtualization and streaming — avoid wholesale re‑renders and prefer incremental updates.
- Batch on write, stream on read: batch inserts to ClickHouse, publish lightweight aggregates to clients.
Where to go next and resources
- ClickHouse official docs: table engines, materialized views, and HTTP insert patterns (search ClickHouse docs for recent 2025–2026 updates).
- react-window / react-virtualized for large lists.
- uPlot or lightweight-charts for streaming charts in production UIs.
Call to action
If you want a runnable starter: clone the repo with the ingest service, ClickHouse DDL, and React/Vue/vanilla examples (includes Docker Compose to run ClickHouse locally and a traffic simulator). Try a full end‑to‑end run with 10k vehicles in simulated mode and share your benchmark results for configuration tuning.
Ready to ship: Download the starter, run the provided load tests, and iterate on retention and aggregation windows to balance cost and latency for your fleet.