Your dashboard says "Live BTC Price: $68,420". Your competitor's dashboard says the same price. Your data updated 300 milliseconds ago. Theirs updated 45 seconds ago. You're both calling the same API. What's the difference?

Everything depends on how you define "live." In 2026, dashboards, data platforms, and AI agents all claim to show "real-time" data, but the latency between what you're displaying and what's actually happening in the world varies by orders of magnitude. Understanding where data sits on the real-time spectrum is the difference between a useful dashboard and expensive theater.

The spectrum: from milliseconds to minutes

Real-time doesn't exist. It's a spectrum.

At one end, WebSocket streams deliver updates every few hundred milliseconds or faster. When you connect directly to a crypto exchange's WebSocket, you get price updates as fast as they're generated, typically 20-100 times per second depending on the exchange. Bitcoin's network only produces a new block roughly every 10 minutes, but Binance's price tick stream updates many times per second. This is as close to "real-time" as you can get on the public internet.
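As a sketch of what consuming such a stream looks like, here is a minimal TypeScript fragment, assuming a Binance-style trade stream whose messages carry the price in a `"p"` field and the trade time in `"T"` (the URL pattern and field names are assumptions; check the exchange's own documentation):

```typescript
// Build the stream URL for a symbol (assumed Binance-style endpoint).
function streamUrl(symbol: string): string {
  return `wss://stream.binance.com:9443/ws/${symbol.toLowerCase()}@trade`;
}

interface Tick {
  price: number;
  tradeTimeMs: number;
}

// Parse one raw stream message; return null on anything malformed,
// since a price feed should never crash the dashboard.
function parseTick(raw: string): Tick | null {
  try {
    const msg = JSON.parse(raw);
    const price = Number(msg.p);
    if (!Number.isFinite(price)) return null;
    return { price, tradeTimeMs: Number(msg.T) };
  } catch {
    return null;
  }
}

// Usage in a browser (render() is a hypothetical display function):
//   const ws = new WebSocket(streamUrl("BTCUSDT"));
//   ws.onmessage = (e) => {
//     const tick = parseTick(e.data);
//     if (tick) render(tick.price);
//   };
```

The defensive parsing matters: a stream that updates dozens of times per second will eventually deliver a malformed frame, and one bad message shouldn't take the panel down.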

In the middle, REST API polling introduces latency. You call the endpoint, wait for a response, process it, then call again. The fastest practical polling interval is usually 1-2 seconds. If you poll every second, you miss everything that happens between polls. But a 1-second stale window is acceptable for many use cases. Stock prices move slowly enough that 1-second latency barely matters to casual traders. Weather data changes so slowly that 5-minute polling is still "real-time" in any meaningful sense.
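The staleness math for polling is worth making explicit. What you display can lag reality by the whole polling interval, plus whatever caching the upstream API does on its side, plus the network round trip. A rough sketch (all values in milliseconds, all inputs illustrative):

```typescript
// Worst-case age of polled data: the full polling interval, plus the
// upstream API's own cache TTL, plus the network round trip.
function worstCaseStalenessMs(
  pollIntervalMs: number,     // how often you call the endpoint
  upstreamCacheTtlMs: number, // how long the API caches its own answer
  networkLatencyMs: number,   // round trip from you to the API
): number {
  return pollIntervalMs + upstreamCacheTtlMs + networkLatencyMs;
}

// A "1-second poll" against an API that itself caches for 5 seconds
// can serve data more than 6 seconds old:
// worstCaseStalenessMs(1_000, 5_000, 200) === 6_200
```

This is why a fast polling loop against a slow upstream buys you nothing: the upstream cache dominates the staleness budget.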

At the other end of the spectrum, cached snapshots can be hours old. Your worker caches a response for 15 minutes before refreshing. Your data feed updates once per day. Your spreadsheet auto-refreshes every 4 hours. This isn't "real-time" in any technical sense, but it's still called "live data" because it's fresher than yesterday's cached version.

Most dashboards mix all three strategies on the same page. This is correct. Your earthquake feed doesn't need WebSocket latency. Your fear and greed index doesn't change every millisecond. But your Bitcoin price absolutely should stream live because that's what traders expect. Building a good dashboard means choosing the right latency for each data source.

Why latency matters: the cost of being behind

Imagine you're trading, whether by hand or through a bot or AI agent that reacts to market data. Bitcoin shows $68,420 on your screen. You decide to trade and send your order. By the time it reaches the exchange, the price is $68,450. Your bid is stale. Rejected.

This happens because of latency. The data you're looking at is behind the actual market state. The larger the latency, the wider the window for the price to move against you. For traders, every millisecond matters. For casual observers, 1 second is fine. For long-term trend analysis, 15 minutes is fine.

There's also a cost to latency in how you experience data. A dashboard that updates every 300ms feels responsive and alive. A dashboard that updates every 5 seconds feels sluggish. A dashboard that updates once per minute feels broken. Perceived responsiveness affects whether users trust your dashboard or use something else.

But there's also a cost to pushing latency too low. Updating every 100ms instead of every 300ms means 3x more network bandwidth, 3x more computation, and 3x more battery drain on phones. Every millisecond you shave off has a cost. The question is whether the benefit is worth it.

The architecture of TerminalFeed: choosing latency per source

TerminalFeed handles 20+ data sources, each with different latency requirements. Our approach is instructive.

WebSocket streaming: Bitcoin price uses a direct WebSocket connection to Binance. Updates arrive many times per second. We throttle rendering to 1 update per second on desktop and 1 per 3 seconds on mobile. This gives users the feeling of "live" data without the battery drain of rendering every tick. Latency: 300-1000 milliseconds.
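A throttle like the one described above can be sketched as a small pure function. The key design point is that dropped ticks still overwrite the pending value, so whenever an update does render, it shows the latest price, not a stale intermediate one. The clock is injected so the logic is testable; all names are illustrative:

```typescript
// Pass at most one update per interval, always retaining the newest
// value so the next emitted update is fresh.
function makeThrottle<T>(intervalMs: number, now: () => number) {
  let lastEmit = -Infinity;
  let pending: T | undefined;
  return {
    push(value: T): T | undefined {
      pending = value;
      if (now() - lastEmit >= intervalMs) {
        lastEmit = now();
        const out = pending;
        pending = undefined;
        return out;      // caller renders this update
      }
      return undefined;  // suppressed for now; latest value retained
    },
  };
}

// Usage: feed every WebSocket tick into push(); render only when it
// returns a value. On mobile, construct with a larger intervalMs.
```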

Server-Sent Events (SSE): Our earthquake and Wikipedia live edits panels use SSE streams. SSE is simpler than WebSocket: it's one-way (server to client) and runs over plain HTTP, so it passes through proxies and standard infrastructure without a protocol upgrade. Updates arrive when they happen, typically every few seconds. We buffer updates and flush every 500ms to batch rendering and avoid thrashing the DOM. Latency: 500-2000 milliseconds.
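The buffer-and-flush pattern is simple enough to sketch. Events accumulate in an array, and a periodic flush hands the whole batch to one render call, so the DOM is touched at most once per flush interval. The timer wiring is shown in comments; the buffer itself is pure so it can be tested directly (all names here are illustrative):

```typescript
// Buffer incoming events and flush them as one batch.
function makeBatcher<T>(onFlush: (batch: T[]) => void) {
  let buf: T[] = [];
  return {
    add(event: T) {
      buf.push(event);
    },
    flush() {
      if (buf.length === 0) return; // nothing to render, skip entirely
      const batch = buf;
      buf = [];
      onFlush(batch); // one render for the whole batch
    },
  };
}

// Browser wiring (renderQuakes is a hypothetical render function):
//   const b = makeBatcher(renderQuakes);
//   eventSource.onmessage = (e) => b.add(JSON.parse(e.data));
//   setInterval(() => b.flush(), 500); // flush every 500ms
```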

REST polling with per-endpoint caching: HackerNews, Reddit, GitHub trending, and most other feeds use our Cloudflare Worker as a cache layer. We call the upstream API once every 30-60 seconds, cache the response, and serve all visitors the cached data. A visitor's view is never more than 1 minute behind the source. But the Worker only calls upstream once per minute, not once per visitor request, so under real traffic we hit the API orders of magnitude less often. Latency: 30-60 seconds. Cost: nearly free.
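The core of such a cache layer fits in a few lines. The sketch below is synchronous for clarity (a real Worker would await fetch()), the clock is injected for testability, and the keys and TTLs are illustrative, not TerminalFeed's actual configuration:

```typescript
interface Entry {
  body: string;
  fetchedAt: number;
}

// Per-endpoint TTL cache: every visitor inside the TTL window gets the
// same cached body, so upstream is called once per TTL, not per visit.
function makeTtlCache(now: () => number) {
  const store = new Map<string, Entry>();
  return function cached(
    key: string,                 // e.g. "/api/hackernews"
    ttlMs: number,               // per-endpoint TTL, e.g. 60_000
    fetchUpstream: () => string, // only runs on a miss or expiry
  ): string {
    const hit = store.get(key);
    if (hit && now() - hit.fetchedAt < ttlMs) return hit.body; // cache hit
    const body = fetchUpstream();
    store.set(key, { body, fetchedAt: now() });
    return body;
  };
}
```

The per-endpoint TTL is the whole trick: one cache function, many routes, each with a staleness budget matched to how fast its data actually moves.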

Long-lived cache: Fear and Greed Index, economic indicators, and prediction markets get called once every 5-15 minutes and cached aggressively. Users see data that's 5-15 minutes old. For these slow-moving metrics, this is perfectly fine. Updating them more frequently would look like a dashboard glitch anyway; users understand that unemployment figures don't change by the minute. Latency: 5-15 minutes. Cost: minimal.

Real architecture: TerminalFeed's API Worker routes every /api/* call through a cache layer with per-endpoint TTLs. BTC price uses WebSocket (1s refreshes). Earthquake data uses SSE. HackerNews polls every 60s and caches for visitors. Fear and Greed caches for 15 minutes. If an API fails, we return stale cache instead of an error. Result: users never see outages, bandwidth is minimal, and we stay well under rate limits on all upstream APIs.
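The "return stale cache instead of an error" behavior deserves its own sketch, since it's what makes upstream outages invisible to visitors. A minimal version, under the same simplifying assumptions as before (synchronous for clarity, injected clock, illustrative names):

```typescript
// TTL cache with a stale-on-error fallback: if upstream fails after the
// cache has expired, serve the expired copy rather than an error.
function makeStaleOnErrorCache(now: () => number) {
  const store = new Map<string, { body: string; fetchedAt: number }>();
  return function get(
    key: string,
    ttlMs: number,
    fetchUpstream: () => string,
  ): string {
    const hit = store.get(key);
    if (hit && now() - hit.fetchedAt < ttlMs) return hit.body; // fresh hit
    try {
      const body = fetchUpstream();
      store.set(key, { body, fetchedAt: now() });
      return body;
    } catch (err) {
      if (hit) return hit.body; // expired, but better than an outage
      throw err;                // nothing cached at all: surface the error
    }
  };
}
```

One consequence worth noting: with this policy, "latency: 60 seconds" silently becomes "latency: however long the outage lasts" during an upstream failure. A production dashboard might want to surface the data's age alongside the value.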

The hidden cost: bandwidth and battery

Every millisecond of latency you remove costs bandwidth and processing power. A WebSocket connection to Binance that updates 100 times per second uses roughly 10x more data than a single REST call every second, for the same data. On desktop, this is negligible. On mobile, it's the difference between 8 hours and 6 hours of battery life.

The math is simple: more updates equals more network requests, more data transferred, more CPU to parse and render, more battery consumed. If your latency is 1 second but your users' phones are dead after 2 hours, you've optimized the wrong thing.

This is why TerminalFeed throttles WebSocket updates on mobile and doubles all polling intervals. A visitor on cellular data shouldn't pay for your desire for sub-second latency. Real dashboards adjust their refresh rate based on device, connection speed, and whether the tab is visible.
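Device-aware refresh rates reduce to a small policy function. The multipliers below are illustrative assumptions, not TerminalFeed's actual values; the point is that the base interval and the adjustments are separate concerns:

```typescript
interface ClientState {
  isMobile: boolean;   // e.g. from a user-agent or viewport check
  tabVisible: boolean; // e.g. from the Page Visibility API
}

// Pick a polling interval from device and tab state.
function refreshIntervalMs(baseMs: number, c: ClientState): number {
  let interval = baseMs;
  if (c.isMobile) interval *= 2;     // double all intervals on mobile
  if (!c.tabVisible) interval *= 10; // background tabs barely poll
  return interval;
}

// refreshIntervalMs(1_000, { isMobile: true, tabVisible: true }) === 2_000
```

In a browser, you'd recompute the interval on `visibilitychange` events and reschedule the timer, so a backgrounded tab stops burning the visitor's battery within one tick.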

True real-time is rare and expensive

If you're building something that truly requires millisecond-level latency, you should know that it's expensive. Firms that trade at that speed co-locate their servers in the same data centers as the exchanges' matching engines. They pay for direct fiber connections. They write in low-level languages like C++ and Rust that compile to fast machine code. They measure latency in single-digit milliseconds.

For a web-based dashboard? You can't do that. Your browser talks to your server, which talks to the upstream API, and every layer adds delay: JavaScript garbage collection pauses, your server's network hops, the upstream API's own latency. By the time data reaches the browser, at least 50-100 milliseconds have passed. Add human perception on top: people can't perceive changes faster than roughly 50-100 milliseconds anyway. The marginal benefit of optimizing below 100ms is essentially zero.

What matters is consistency. A 500ms latency that's always 500ms feels better than a 100ms latency that's sometimes 100ms and sometimes 2000ms. Users notice jitter more than they notice absolute latency.

Making the tradeoff decision

When you're building a dashboard or a data feed, ask yourself three questions: How fast does the underlying data actually change? How stale can it be before decisions based on it go wrong? And what does each extra update cost in bandwidth, compute, and battery?

The best dashboards don't chase sub-second latency everywhere. They choose the right latency for each data source. Bitcoin price warrants WebSocket. Earthquake alerts warrant SSE. HackerNews warrants 60-second polling. Fear and Greed Index warrants 15-minute cache. Every choice is correct given the use case.

See how TerminalFeed mixes WebSocket, SSE, and polling to deliver live data efficiently. Explore the dashboard and API architecture.

In 2026, "real-time" has lost all meaning. We say "real-time data" to mean everything from Binance WebSocket ticks to the unemployment report from yesterday. The term is useless. What matters is: how old is this data when I look at it, and is that old enough for what I'm doing? Answer those questions honestly, and you'll build dashboards that are genuinely useful instead of just visually impressive.