Web Development Wednesday, 29 April 2026

GitHub stars are broken - the real problem is we trusted them at all

GitHub stars were supposed to signal quality. High star count meant a project was useful, well-maintained, trusted by the community. Developers used stars to decide which library to adopt. Recruiters used them to evaluate candidates. The metric became a proxy for reputation.

Now projects buy stars. Services sell them by the thousand. Bots inflate counts overnight. The metric that developers relied on for over a decade is being systematically gamed, and the fraud is getting harder to detect.

But here's the uncomfortable truth: stars were never reliable. The fraud just made the underlying problem impossible to ignore.

How We Got Here

Stars started as a bookmarking feature. You starred a repository to save it for later. GitHub displayed star counts publicly, and the metric took on a life of its own. High stars meant social proof. Projects with thousands of stars attracted contributors, funding, and attention. Projects with dozens of stars languished in obscurity, regardless of technical merit.

The incentive structure was inevitable. Stars became currency, so people found ways to manufacture them. Early on, this was organic manipulation - developers asking friends to star their projects, or posting on forums to drum up support. Annoying but small-scale.

What changed is professionalisation. Services now sell stars as a product. You pay, they deliver, and the stars look legitimate enough to pass casual inspection. Bot accounts with realistic activity patterns. Gradual increases that mimic organic growth. The fraud adapted to detection methods faster than GitHub could respond.

Why Detection Doesn't Scale

GitHub could crack down on fake stars - ban bot accounts, flag suspicious activity spikes, verify user authenticity. But this becomes an arms race. Fraudsters adapt. Detection methods escalate. Legitimate users get caught in false positives. The harder GitHub tries to police stars, the more complex the fraud becomes, and the more collateral damage accumulates.

The deeper issue is that stars were never designed to bear this much weight. They're a lightweight interaction, not a rigorous quality signal. Trying to secure them retroactively is like trying to turn a popularity contest into a peer-review system. The mechanism wasn't built for the load we placed on it.

Even if GitHub eliminated all fraud tomorrow, stars would still be a weak proxy. A project with 10,000 stars might be brilliant. Or it might be a well-marketed tutorial that hasn't been maintained in three years. Stars measure attention, not quality. We conflated the two because we needed a simple heuristic and stars were visible.

What Actually Matters

If stars don't work, what does? The original Dev.to article suggests dependency graphs. If 500 projects depend on your library in their package.json or requirements.txt, that's a signal. Those projects are betting their functionality on your code. That's harder to fake than stars and more meaningful than passive interest.
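The dependency signal is easy to compute if you have the manifests. A minimal sketch, assuming you've crawled a corpus of parsed package.json files (the `corpus` data and `count_dependents` helper here are illustrative, not any real tool's API):

```python
def count_dependents(manifests, package):
    """Count how many project manifests declare `package` as a dependency.

    `manifests` is a list of parsed package.json dicts; both regular and
    dev dependencies count, since either is a bet on the library.
    """
    total = 0
    for m in manifests:
        deps = {**m.get("dependencies", {}), **m.get("devDependencies", {})}
        if package in deps:
            total += 1
    return total

# Toy corpus standing in for a crawl of public repositories.
corpus = [
    {"name": "app-a", "dependencies": {"left-pad": "^1.3.0"}},
    {"name": "app-b", "devDependencies": {"left-pad": "^1.0.0"}},
    {"name": "app-c", "dependencies": {"lodash": "^4.17.21"}},
]

print(count_dependents(corpus, "left-pad"))  # 2
```

Faking this number means publishing hundreds of plausible projects that actually declare the dependency, which is far more expensive than buying stars.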

Contribution history matters. A project with active pull requests, regular commits, and responsive maintainers is healthier than one with 50,000 stars and no updates in six months. These signals require sustained effort to manipulate. They reflect actual development activity, not marketing.
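Those two health signals - commit recency and maintainer responsiveness - can be combined into a crude check. A sketch with illustrative thresholds (180 idle days, 50% issue response rate are assumptions, not established cut-offs):

```python
from datetime import datetime, timezone

def looks_maintained(last_commit, open_issues, answered_issues, now):
    """Crude health check: recent commits plus a responsive issue tracker
    beat a large star count. Thresholds are illustrative, not canonical."""
    days_idle = (now - last_commit).days
    response_rate = answered_issues / open_issues if open_issues else 1.0
    return days_idle < 180 and response_rate >= 0.5

now = datetime(2026, 4, 29, tzinfo=timezone.utc)

# Active project: committed this month, most issues get a reply.
print(looks_maintained(datetime(2026, 4, 1, tzinfo=timezone.utc), 50, 40, now))  # True

# Stale project: no commits in over a year, regardless of its stars.
print(looks_maintained(datetime(2025, 1, 1, tzinfo=timezone.utc), 50, 40, now))  # False
```

Note that nothing here is hard to measure; it's just harder to fake than a counter that increments on a single click.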

Real-world usage matters most. If a library is deployed in production environments, referenced in technical documentation, or discussed in engineering blogs solving actual problems - that's validation. It's distributed, hard to game, and directly tied to utility.

The problem is these signals are harder to aggregate into a single number. Dependency counts vary by ecosystem. Contribution patterns look different across project types. Real-world usage is scattered and qualitative. We wanted stars to give us a simple answer. The reality is messy.

The Trust Collapse

Once a metric becomes a target, it stops being a good metric. That's Goodhart's Law, and GitHub stars are a textbook case. The moment stars became a proxy for credibility, the incentive to manipulate them appeared. The manipulation didn't break the system - it revealed that the system was fragile from the start.

This has implications beyond GitHub. Any visible metric used to make trust decisions will face the same pressure. Download counts. Follower numbers. Citation metrics. The pattern repeats: a signal emerges, people rely on it, gaming begins, the signal degrades. Fighting the gaming is expensive and often futile. The better move is to stop treating simple metrics as trustworthy in the first place.

What Developers Should Do

Stop using stars as a primary decision factor. Look at the commit history. Check when the last release shipped. Read the issues - are maintainers responsive, or is the backlog piling up unanswered? See if the project has active contributors beyond the original author, or if it's a one-person show with a single point of failure.
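Most of that checklist is available from GitHub's public REST API. A minimal sketch using only the standard library - the `vitals` and `fetch_vitals` names are this example's own, and the fields shown are a subset of the real `GET /repos/{owner}/{repo}` payload:

```python
import json
from urllib.request import urlopen

def vitals(repo_json):
    """Extract the due-diligence fields from a GitHub repo API payload.

    Deliberately ignores stargazers_count: the point is to look at
    activity, not popularity.
    """
    return {
        "last_push": repo_json["pushed_at"],
        "open_issues": repo_json["open_issues_count"],
        "forks": repo_json["forks_count"],
        "archived": repo_json["archived"],
    }

def fetch_vitals(owner, repo):
    # Unauthenticated requests are rate-limited; fine for spot checks.
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urlopen(url) as resp:
        return vitals(json.load(resp))

# Offline example with a trimmed payload of the same shape.
sample = {"pushed_at": "2026-04-20T09:12:00Z", "open_issues_count": 12,
          "forks_count": 340, "archived": False, "stargazers_count": 48000}
print(vitals(sample)["last_push"])  # 2026-04-20T09:12:00Z
```

Releases and maintainer responsiveness need further calls (the `/releases/latest` and `/issues` endpoints), but even this much tells you more than the star counter does.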

Check dependencies. Is this library used by projects you trust? If a well-known company or open-source project depends on it, that carries more weight than star count. Dependencies are harder to fake and represent actual technical decisions by other developers.

Run the code. Clone it. Read the source. Does it do what it claims? Is the code quality reasonable? This takes time, but it's the only reliable way to evaluate a library. Stars were never a substitute for actually looking at what you're about to integrate into your project.

The Bigger Picture

The star problem is a microcosm of a larger issue with online reputation systems. We want simple, legible signals to guide decisions. We create metrics that aggregate complex reality into a single number. Then we're surprised when people game the metric, and frustrated when fighting the gaming proves impossible.

The answer isn't better fraud detection. It's accepting that no simple metric will capture quality, and building systems that don't rely on one. Multiple signals. Contextual evaluation. Transparency about limitations. It's messier than a star count. It's also more honest about what we actually know.

GitHub stars are broken. They've probably been broken for years. The bots just made it obvious. The fix isn't technical - it's cultural. Stop trusting simple metrics to make complex decisions. Do the work to evaluate what you're depending on. And accept that if a signal is easy to read, it's probably easy to fake.

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.
