Builders & Makers Sunday, 22 March 2026

Building a health coach that actually learns: Next.js, Supabase, and the data nobody trusts


Wearable health data is famously messy. Whoop tracks recovery differently from Oura. Garmin's sleep stages don't map cleanly to Apple Watch metrics. Every device has its own API, its own definitions, and its own idea of what counts as "good" sleep or "high" strain. For anyone trying to build a unified view of personal health, this fragmentation is the first problem. Markus Baier built ViQO to solve it - and learned that normalising the data is only half the challenge.

ViQO is a web app that connects to multiple wearables, pulls metrics like sleep quality, readiness, and strain, then uses statistical analysis to find patterns specific to you. Not generic advice about what works for most people, but correlations in your own data. Does your readiness score predict tomorrow's workout quality? Does sleep duration matter more than sleep stages for your recovery? The app calculates Pearson correlations across your metrics, surfaces the strongest relationships, and uses GPT to explain what they mean in plain language.

The architecture: normalised data and self-calibrating predictions

Baier built ViQO on Next.js for the frontend and Supabase for backend infrastructure - PostgreSQL database, authentication, and real-time subscriptions. The entire stack costs $15 per month to run, which is possible because inference is local where it can be and API-based only when necessary.

The core challenge was normalisation. Oura's readiness score runs 0-100. Whoop's recovery percentage is conceptually similar but calculated differently. Garmin's Body Battery adds another scale. ViQO doesn't try to convert these into a universal metric - that would lose information. Instead, it stores raw values alongside device-specific metadata, then applies z-score normalisation when calculating correlations. This preserves the relative movement in each metric while making cross-device comparisons mathematically valid.
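The z-score step can be sketched in a few lines. This is a minimal illustration of the approach described above, not ViQO's actual code; the function name and sample values are invented.

```python
from statistics import mean, stdev

def z_scores(values):
    """Normalise a series to zero mean and unit variance.

    Applied per device and per metric, so Oura's 0-100 readiness and
    Whoop's recovery percentage become comparable without forcing
    either into a 'universal' scale.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return [0.0] * len(values)  # a flat series carries no signal
    return [(v - mu) / sigma for v in values]

# Raw values stay stored as-is; normalisation happens at analysis time.
oura_readiness = [72, 85, 64, 90, 78]     # 0-100 scale
whoop_recovery = [0.61, 0.88, 0.54, 0.93, 0.70]  # fraction

print(z_scores(oura_readiness))
print(z_scores(whoop_recovery))
```

Because each series is centred on its own mean, a "good day" on either device lands at roughly the same z-value, which is what makes the cross-device correlations valid.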

The correlation engine runs Pearson calculations between every pair of metrics - sleep vs readiness, strain vs recovery, nutrition vs performance. It flags relationships above a 0.6 threshold (strong positive or negative correlation) and surfaces them in the UI. But here's where it gets interesting: the app doesn't just show you the correlation. It tracks prediction accuracy over time and adjusts confidence scores based on how often the pattern holds.
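The pairwise sweep with a 0.6 cutoff looks roughly like this - a hedged sketch with invented metric names and sample data, not the production engine:

```python
from itertools import combinations
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0  # a constant series correlates with nothing
    return cov / (vx ** 0.5 * vy ** 0.5)

def strong_pairs(metrics, threshold=0.6):
    """Flag every metric pair whose |r| clears the threshold, strongest first."""
    flagged = []
    for (name_a, a), (name_b, b) in combinations(metrics.items(), 2):
        r = pearson(a, b)
        if abs(r) >= threshold:
            flagged.append((name_a, name_b, round(r, 2)))
    return sorted(flagged, key=lambda t: -abs(t[2]))

metrics = {
    "sleep_hours": [6.0, 7.5, 5.5, 8.0, 7.0],
    "readiness":   [60, 82, 55, 90, 75],
    "strain":      [18, 10, 20, 8, 12],
}
print(strong_pairs(metrics))
```

With n metrics this is n(n-1)/2 comparisons, cheap enough to rerun on every sync.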

If your data says sleep duration correlates with next-day readiness at 0.75, ViQO will predict tomorrow's readiness based on last night's sleep. Then it checks if the prediction was accurate. If it consistently underpredicts or overpredicts, the model self-calibrates - adjusting weights or flagging that this correlation might be spurious. This isn't machine learning in the training sense. It's statistical feedback that prevents the system from over-relying on patterns that don't generalise.
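The article doesn't specify the calibration rule, but the described behaviour - nudging predictions when they consistently miss in one direction - can be modelled as a running bias correction. Everything below (the linear fit, the learning rate, the sample numbers) is an assumption for illustration:

```python
def predict(sleep_hours, slope, intercept, bias=0.0):
    """Predict tomorrow's readiness from last night's sleep,
    corrected by the running bias observed so far."""
    return slope * sleep_hours + intercept + bias

def calibrate(bias, predicted, actual, rate=0.2):
    """Nudge the bias toward the observed error. Persistent over- or
    under-prediction accumulates; random noise averages out."""
    return bias + rate * (actual - predicted)

# Hypothetical fit: readiness ~ 10 * sleep + 5, systematically 6 points low.
bias = 0.0
history = [(7.0, 81), (6.5, 76), (8.0, 91), (7.5, 86)]
for sleep, actual in history:
    p = predict(sleep, slope=10, intercept=5, bias=bias)
    bias = calibrate(bias, p, actual)
print(round(bias, 2))  # bias drifts toward the systematic +6 error
```

A correlation whose bias never settles - or whose errors stay large after correction - is a candidate for the "might be spurious" flag the article mentions.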

GPT as the coaching layer

The statistical engine finds patterns. GPT makes them useful. When ViQO surfaces a correlation, it sends the data to GPT with structured context: the metrics involved, the correlation strength, recent trends, and the user's goals. GPT returns coaching advice in natural language - not generic tips, but specific suggestions tied to what the data shows.
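The structured context could be assembled along these lines. The field names and instruction wording are illustrative guesses, not ViQO's actual prompt; only the principle - constrain the model to observed correlations - comes from the article:

```python
import json

def build_coaching_prompt(correlation, user_goal):
    """Assemble the structured context sent to the model alongside a
    constraint that advice must stay grounded in observed data."""
    context = {
        "metric_a": correlation["metric_a"],
        "metric_b": correlation["metric_b"],
        "pearson_r": correlation["r"],
        "recent_trend": correlation["trend"],
        "user_goal": user_goal,
    }
    instructions = (
        "You are a health coach. Base advice ONLY on the observed "
        "correlation below. Do not assert causation or invent metrics."
    )
    return f"{instructions}\n\n{json.dumps(context, indent=2)}"

prompt = build_coaching_prompt(
    {"metric_a": "strain", "metric_b": "next_day_readiness",
     "r": -0.68, "trend": "readiness declining over last 7 days"},
    user_goal="improve recovery",
)
print(prompt)
```

Keeping the payload to metric values and trends - no free-form user text - is also what makes the anonymisation described later straightforward.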

Example: if strain correlates negatively with next-day readiness (you push hard, recovery suffers), GPT might suggest spacing high-strain days or prioritising sleep on heavy training days. If sleep stages show no correlation with readiness but duration does, it'll focus on time in bed rather than optimising REM or deep sleep percentages.

This is where the "AI health coach" framing earns itself. The advice isn't coming from a rules engine or a lookup table. It's synthesised from your specific patterns, expressed conversationally, and updated as your data changes. The limitation is verification - GPT can hallucinate causal relationships or oversimplify complex biology. Baier handles this by constraining prompts to observed correlations only and surfacing the raw data alongside the advice. Users can see what the AI is basing its recommendations on.

GDPR-by-design infrastructure

Health data is sensitive, and ViQO treats it that way from the architecture up. All personal metrics are stored in Supabase with row-level security - users can only query their own data, enforced at the database level. API keys for wearable integrations are encrypted at rest. Data sent to GPT is anonymised - no names, no identifying details, just metric values and timestamps.
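The anonymisation step can be sketched as a field whitelist applied before any payload leaves the database. The field names here are invented for illustration; the design point is that allow-listing drops new PII fields by default, where a deny-list would silently pass them through:

```python
ALLOWED_FIELDS = {"metric", "value", "timestamp"}

def anonymise(records):
    """Keep only whitelisted metric fields; everything else - user IDs,
    emails, device serials - is stripped before the API call."""
    return [
        {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
        for record in records
    ]

raw = [{"user_id": "ab12", "email": "x@example.com",
        "metric": "readiness", "value": 78, "timestamp": "2026-03-21"}]
print(anonymise(raw))
```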

This isn't GDPR compliance as an afterthought. It's designed into the data model. Because Supabase handles auth and permissions natively, there's no need for a custom backend to enforce access control. The database does it. Because API calls to OpenAI don't include user IDs, there's no linkage between coaching responses and individuals. The design makes it structurally difficult to leak data, even accidentally.

What this enables

The technical stack matters because it determines what's possible at low cost. Running this on AWS with a custom backend would cost hundreds per month and require ongoing maintenance. Supabase abstracts the infrastructure. Next.js handles frontend and API routes in one deployment. GPT adds the coaching layer without needing to train a model. The result is a solo developer shipping a production app that handles real health data, runs statistical analysis, and provides personalised advice - for the price of a decent lunch.

That's the unlock. Not the specific tools, but the fact that the tools exist at this cost point. Baier's technical breakdown shows what one person can build in 2025 if they pick the right abstractions and don't try to reinvent the parts that are already solved. Normalising wearable data is still hard. Calculating meaningful correlations is still hard. But hosting, auth, databases, and LLM inference? Those are solved problems now. Commoditised infrastructure.

The interesting question is what happens when a hundred developers build variations of this. Not clones of ViQO, but apps that solve adjacent problems - meal timing, workout programming, supplement tracking - using the same architecture. Supabase plus GPT plus Next.js is becoming a pattern. The apps built on that pattern are just starting to show what's possible when the infrastructure cost drops to near zero.


Today's Sources

DEV.to AI
How I built an AI health coach with Next.js, Supabase & GPT-5.2 - from wearable APIs to recovery predictions
DEV.to AI
Turning GitHub Copilot CLI into an AI Agent via ACP
Hacker News Best
Professional video editing, right in the browser with WebGPU and WASM
Hacker News Best
Floci - A free, open-source local AWS emulator
Hacker News Best
The three pillars of JavaScript bloat
Towards Data Science
Escaping the SQL Jungle
DEV.to AI
Automate or Stagnate: AI-Powered Customs for Southeast Asia Sellers
DEV.to AI
Amazon Q in Practice: How AI Is Transforming My AWS Workflow Between the Console and VS Code
Towards Data Science
Building a Navier-Stokes Solver in Python from Scratch: Simulating Airflow
The Robot Report
The great robot race: How companies can balance speed to market and compliance in the U.S.
Robohub
Robot Talk Episode 149 - Robot safety and security, with Krystal Mattich
The Robot Report
Allient to present new generation of mobile robot drive systems at LogiMAT
The Robot Report
How offline programming reduces machining automation deployment times
DEV.to AI
From Pixels to Physicality: Engineering Olaf with Reinforcement Learning, Control Systems, and Illusion Design
Azeem Azhar
🔮 Exponential View #566: A solar shield; AI agents; human judgment; China's robots++
Sebastian Raschka
A Visual Guide to Attention Variants in Modern LLMs

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes