Every few years, web development rediscovers a basic truth: the server is better at server things than the browser is. Polling APIs, caching data, handling reconnection logic - these are not jobs you want running in 47 open tabs across your users' machines.
Yet most real-time web apps still do exactly that. Every connected client polls the same API endpoint, parses the same JSON, and updates the same UI. It's wasteful, fragile, and makes your upstream API very unhappy.
This tutorial by Markus Eisele shows a better pattern: Server-Sent Events (SSE) with server-side polling and caching. The browser subscribes once. The server handles everything else. One upstream request serves hundreds of clients.
The Problem with Client-Side Polling
Here's the typical pattern for a real-time dashboard. You want to show live data - stock prices, server metrics, or in this case, the International Space Station's current position. The naive approach: every client polls the API every few seconds.
This works fine for one user. It falls apart at scale. If 1,000 people are watching your dashboard, you're hitting the upstream API 1,000 times per polling interval. Most APIs rate-limit you long before you reach that number. And even if they don't, you're burning bandwidth and compute on duplicate work.
The smarter move is to flip the model. The server polls once. The server caches the result. The server pushes updates to every connected client when the data changes. Clients don't poll. They subscribe.
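The flipped model can be sketched in a few lines of plain Java. This is an illustrative skeleton, not the tutorial's actual code - the names (Broadcaster, fetchOnce, tick) are made up, and the upstream call is stubbed out:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

// One poll cycle on the server serves every subscribed client.
class Broadcaster {
    final List<Consumer<String>> clients = new CopyOnWriteArrayList<>();
    final AtomicInteger upstreamCalls = new AtomicInteger();

    // Stands in for one real upstream API request.
    String fetchOnce() {
        upstreamCalls.incrementAndGet();
        return "{\"lat\":47.6,\"lon\":-122.3}";
    }

    // One scheduled poll cycle: exactly one upstream call,
    // fanned out to every connected client.
    void tick() {
        String data = fetchOnce();
        clients.forEach(c -> c.accept(data));
    }
}
```

In a real server, tick() would run on a scheduler every few seconds. The point is the ratio: 1,000 subscribers and one tick still means one upstream call.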
Server-Sent Events, Done Properly
SSE is the underrated sibling of WebSockets. It's simpler, more reliable, and perfect for one-way data streams. The browser opens a long-lived HTTP connection. The server sends updates whenever it has them. No handshake, no protocol negotiation, no binary framing. Just text over HTTP. The browser's built-in EventSource API even reconnects automatically when the connection drops.
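"Just text over HTTP" is literal. Each SSE message is an optional "event:" line, one or more "data:" lines, and a blank line as terminator. A hypothetical one-method formatter shows the whole wire format:

```java
// The SSE wire format: "event:" names the message type (optional),
// "data:" carries the payload, and a blank line ends the message.
class SseFrame {
    static String format(String event, String data) {
        return "event: " + event + "\n"
             + "data: " + data + "\n\n";
    }
}
```

That blank line is the entire framing protocol - no length prefixes, no masking, no binary opcodes.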
The tutorial uses Quarkus to build an SSE endpoint that tracks the ISS. The server polls NASA's API every 5 seconds, caches the position in an application-scoped bean, and broadcasts updates to all connected clients. Each client gets the same data at the same time. One API call. Hundreds of updates.
This is not a toy example. This is how you build production dashboards. The architecture handles unreliable upstream APIs, client disconnections, and variable load without breaking a sweat.
Caching That Actually Works
The key is application-scoped caching. The server stores the latest ISS position in memory. When a client connects, it gets the cached value immediately - no waiting for the next poll cycle. When new data arrives, the server updates the cache and pushes to all connected clients at once.
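The connect-time behavior can be sketched with a single AtomicReference. This is a minimal illustration of the idea, not the tutorial's bean - the class and method names (LatestPosition, onConnect) are assumptions:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

// Application-scoped cache sketch: the latest value lives in memory,
// and a connecting client is served from it immediately.
class LatestPosition {
    private final AtomicReference<String> cache = new AtomicReference<>();

    // Called by the server's poll loop when fresh data arrives.
    void update(String position) { cache.set(position); }

    // Called when a client connects: replay the cached value right away
    // instead of making the client wait for the next poll cycle.
    void onConnect(Consumer<String> client) {
        String cached = cache.get();
        if (cached != null) client.accept(cached);
    }
}
```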
This pattern eliminates the cold-start problem. New clients don't see a blank screen while waiting for the first update. They get the most recent data instantly, then receive live updates as they happen.
It also decouples your app from the upstream API's reliability. If NASA's endpoint goes down for 30 seconds, your clients keep displaying the last known position. When it comes back, updates resume. No error messages. No broken UI. Just graceful degradation.
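The degradation behavior follows from one design choice in the poll loop: a failed fetch leaves the cache untouched. A hypothetical sketch (ResilientPoller is an illustrative name, not from the tutorial):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// If the upstream call fails, keep the last known value in the cache.
// Clients never see an error - just slightly stale data until recovery.
class ResilientPoller {
    final AtomicReference<String> lastKnown = new AtomicReference<>("unknown");

    void poll(Supplier<String> upstream) {
        try {
            lastKnown.set(upstream.get());  // success: refresh the cache
        } catch (RuntimeException e) {
            // upstream down: swallow the failure, keep the last known value
        }
    }
}
```

When the upstream recovers, the next successful poll() overwrites the stale value and updates resume, exactly the "no error messages, no broken UI" behavior described above.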
Why This Matters for Builders
Most real-time features don't need WebSockets. They need a server that owns the polling logic and a transport layer that pushes updates efficiently. SSE does both with less code and fewer failure modes than WebSockets.
For internal dashboards, this is immediately practical. Server metrics, deployment status, queue depths - anything that updates frequently but doesn't need sub-second latency. Move the polling server-side, cache aggressively, and push updates over SSE. You'll reduce API costs, simplify client code, and improve reliability in one move.
For customer-facing products, it changes the performance profile. Instead of every user hammering your backend, one scheduled job feeds everyone. Your server handles 1,000 concurrent SSE connections more easily than 1,000 concurrent API polls.
The Template for Real-Time
This pattern shows up everywhere once you start looking for it. Live sports scores. Stock tickers. Server monitoring. Delivery tracking. Anywhere you need "live" data that updates every few seconds, not every few milliseconds.
The browser is great at rendering updates. It's terrible at deciding when to fetch them. Let the server make that decision. Let the server own the relationship with upstream APIs. Let the server cache and distribute. The browser just subscribes and displays.
That's the architecture. One source of truth. One polling loop. Many subscribers. Simple, reliable, efficient. Exactly what real-time should be.