Web Development · Monday, 30 March 2026

Let the Server Do the Work


Every few years, web development rediscovers a basic truth: the server is better at server things than the browser is. Polling APIs, caching data, handling reconnection logic - these are not jobs you want running in 47 open tabs across your users' machines.

Yet most real-time web apps still do exactly that. Every connected client polls the same API endpoint, parses the same JSON, and updates the same UI. It's wasteful, fragile, and makes your upstream API very unhappy.

This tutorial by Markus Eisele shows a better pattern: Server-Sent Events (SSE) with server-side polling and caching. The browser subscribes once. The server handles everything else. One upstream request serves hundreds of clients.

The Problem with Client-Side Polling

Here's the typical pattern for a real-time dashboard. You want to show live data - stock prices, server metrics, or in this case, the International Space Station's current position. The naive approach: every client polls the API every few seconds.

This works fine for one user. It falls apart at scale. If 1,000 people are watching your dashboard, you're hitting the upstream API 1,000 times per polling interval. Most APIs rate-limit you long before you reach that number. And even if they don't, you're burning bandwidth and compute on duplicate work.

The smarter move is to flip the model. The server polls once. The server caches the result. The server pushes updates to every connected client when the data changes. Clients don't poll. They subscribe.
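The flipped model can be sketched in plain JDK Java, no framework required. This is not the tutorial's Quarkus code; the class and method names here are illustrative. One scheduled task fetches the upstream API and stores only the latest result, which every subscriber reads.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// One polling loop for the whole server: fetch upstream on a fixed
// schedule and keep the latest result where every subscriber can read it.
public class ServerPoller {
    private final AtomicReference<String> latest = new AtomicReference<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // A single scheduled task replaces N clients polling independently.
    public void start(Supplier<String> fetchUpstream, long periodSeconds) {
        scheduler.scheduleAtFixedRate(() -> pollOnce(fetchUpstream),
                0, periodSeconds, TimeUnit.SECONDS);
    }

    void pollOnce(Supplier<String> fetchUpstream) {
        latest.set(fetchUpstream.get());
    }

    public String latest() {   // what subscribers are served
        return latest.get();
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

However many clients connect, the upstream sees exactly one request per period.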

Server-Sent Events, Done Properly

SSE is the underrated sibling of WebSockets. It's simpler, more reliable, and perfect for one-way data streams. The browser opens a long-lived HTTP connection. The server sends updates whenever it has them. No handshake, no protocol negotiation, no binary framing. Just text over HTTP.
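"Just text over HTTP" is literal: per the SSE wire format, an event is an optional `event:` line, one `data:` line per line of payload, and a blank line to terminate the frame. A tiny formatter (the helper name is mine) makes that concrete:

```java
// SSE frames are plain text: an optional "event:" name line, one
// "data:" line per payload line, then a blank line ends the frame.
public class SseFrame {
    public static String format(String event, String data) {
        StringBuilder sb = new StringBuilder();
        if (event != null) {
            sb.append("event: ").append(event).append('\n');
        }
        for (String line : data.split("\n", -1)) {
            sb.append("data: ").append(line).append('\n');
        }
        return sb.append('\n').toString();
    }
}
```

That blank-line terminator is the entire framing protocol, which is why SSE needs no handshake or binary negotiation.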

The tutorial uses Quarkus to build an SSE endpoint that tracks the ISS. The server polls NASA's API every 5 seconds, caches the position in an application-scoped bean, and broadcasts updates to all connected clients. Each client gets the same data at the same time. One API call. Hundreds of updates.
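The tutorial implements this with Quarkus; as a framework-free sketch of the broadcast step, each `Consumer<String>` below stands in for one client's open SSE stream. The names are illustrative, not the tutorial's:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Fan-out: one upstream result delivered to every open connection.
public class Broadcaster {
    private final List<Consumer<String>> clients = new CopyOnWriteArrayList<>();

    public void subscribe(Consumer<String> client)   { clients.add(client); }
    public void unsubscribe(Consumer<String> client) { clients.remove(client); }

    // One API call, N deliveries: every client sees the same data.
    public void broadcast(String payload) {
        for (Consumer<String> client : clients) {
            client.accept(payload);
        }
    }
}
```

`CopyOnWriteArrayList` keeps broadcasting safe while clients connect and disconnect concurrently, which is exactly the churn a real dashboard produces.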

This is not a toy example. This is how you build production dashboards. The architecture handles unreliable upstream APIs, client disconnections, and variable load without breaking a sweat.

Caching That Actually Works

The key is application-scoped caching. The server stores the latest ISS position in memory. When a client connects, it gets the cached value immediately - no waiting for the next poll cycle. When new data arrives, the server updates the cache and pushes to all connected clients at once.
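The connect-time replay can be sketched by combining the cache and the fan-out in one class (again an illustrative sketch, not the tutorial's bean): a new subscriber is handed the cached value immediately, then receives every later update.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

// Application-scoped cache plus fan-out: subscribers get the cached
// value on connect, then live updates as they arrive.
public class CachedFeed {
    private final AtomicReference<String> cache = new AtomicReference<>();
    private final List<Consumer<String>> clients = new CopyOnWriteArrayList<>();

    public void subscribe(Consumer<String> client) {
        String last = cache.get();
        if (last != null) {
            client.accept(last);   // no blank screen waiting for the next poll
        }
        clients.add(client);
    }

    public void publish(String payload) {  // called by the polling loop
        cache.set(payload);
        for (Consumer<String> client : clients) {
            client.accept(payload);
        }
    }
}
```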

This pattern eliminates the cold-start problem. New clients don't see a blank screen while waiting for the first update. They get the most recent data instantly, then receive live updates as they happen.

It also decouples your app from the upstream API's reliability. If NASA's endpoint goes down for 30 seconds, your clients keep displaying the last known position. When it comes back, updates resume. No error messages. No broken UI. Just graceful degradation.
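The degradation behavior comes down to one decision in the refresh path: on upstream failure, keep serving the last known value instead of propagating the error. A minimal sketch of that choice (names are mine):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// If the upstream call fails, fall back to the last known value
// instead of surfacing an error to every connected client.
public class ResilientCache {
    private final AtomicReference<String> lastKnown = new AtomicReference<>();

    public String refresh(Supplier<String> fetchUpstream) {
        try {
            String fresh = fetchUpstream.get();
            lastKnown.set(fresh);
            return fresh;
        } catch (RuntimeException upstreamDown) {
            return lastKnown.get();   // graceful degradation, not a broken UI
        }
    }
}
```

When the upstream recovers, the next successful refresh overwrites the stale value and updates resume automatically.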

Why This Matters for Builders

Most real-time features don't need WebSockets. They need a server that owns the polling logic and a transport layer that pushes updates efficiently. SSE does both with less code and fewer failure modes than WebSockets.

For internal dashboards, this is immediately practical. Server metrics, deployment status, queue depths - anything that updates frequently but doesn't need sub-second latency. Move the polling server-side, cache aggressively, and push updates over SSE. You'll reduce API costs, simplify client code, and improve reliability in one move.

For customer-facing products, it changes the performance profile. Instead of every user hammering your backend, one scheduled job feeds everyone. Your server handles 1,000 concurrent SSE connections more easily than 1,000 concurrent API polls.

The Template for Real-Time

This pattern shows up everywhere once you start looking for it. Live sports scores. Stock tickers. Server monitoring. Delivery tracking. Anywhere you need "live" data that updates every few seconds, not every few milliseconds.

The browser is great at rendering updates. It's terrible at deciding when to fetch them. Let the server make that decision. Let the server own the relationship with upstream APIs. Let the server cache and distribute. The browser just subscribes and displays.

That's the architecture. One source of truth. One polling loop. Many subscribers. Simple, reliable, efficient. Exactly what real-time should be.

Source: Build a Real-Time ISS Tracker with Quarkus, SSE, and Qute (Dev.to)

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes