Here's a mistake developers make all the time: calculating distances in application code. You pull every location from the database, loop through them in PHP, run Haversine formulas on lat/long pairs, and hope your server doesn't melt when the dataset grows. It works fine with 100 records. It falls apart at 10,000.
The fix isn't better code. It's moving the work to where it belongs: the database. PostgreSQL with PostGIS does spatial queries natively, with proper indexes, in logarithmic time. That means O(log N) lookups instead of O(N) loops. The difference between a query that takes 5 milliseconds and one that takes 5 seconds.
What PostGIS Actually Does
PostGIS is an extension that turns PostgreSQL into a spatial database. It adds geometry and geography types - meaning you can store points, lines, and polygons as native database objects, not just latitude and longitude floats.
More importantly, it adds spatial indexes. A GiST (Generalised Search Tree) index lets PostgreSQL find nearby points without scanning every row. It's the same principle as a B-tree index for ordinary scalar lookups, but optimised for two-dimensional space.
When you write a proximity query in PostGIS, the database uses the index to narrow down candidates, then calculates exact distances only for the subset that matters. That's why it scales. The cost grows logarithmically with dataset size, not linearly.
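As a minimal sketch - assuming an illustrative `locations` table with a PostGIS `coordinates` column - the index is a single statement:

```sql
-- GiST index so proximity queries can prune candidates
-- instead of scanning the whole table
CREATE INDEX locations_coordinates_gist
    ON locations USING GIST (coordinates);
```

With the index in place, running EXPLAIN on a proximity query should show an index scan rather than a sequential scan over the table.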
The Laravel Implementation
In Laravel, you enable PostGIS with a migration that creates the extension, then define your location column as a spatial type. Instead of separate lat and long columns, you store a single POINT field - as the geography type if you want distances measured in meters rather than degrees. Queries look like this:

SELECT * FROM locations WHERE ST_DWithin(coordinates, ST_SetSRID(ST_MakePoint(lng, lat), 4326)::geography, distance_in_meters);
That's it. The database handles the heavy lifting. You're not looping in PHP. You're not calculating Haversine distances for every row. You're asking PostgreSQL to find points within a radius, and it uses the spatial index to do it efficiently.
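End to end, a minimal setup might look like the following - the `locations` table and `coordinates` column are illustrative names, and the column is stored as geography(Point, 4326) so ST_DWithin works in meters:

```sql
-- Enable the extension (once per database)
CREATE EXTENSION IF NOT EXISTS postgis;

-- geography(Point, 4326): WGS 84 lat/long,
-- with distances measured in meters
ALTER TABLE locations
    ADD COLUMN coordinates geography(Point, 4326);

-- Find everything within 2 km of a given point
-- (ST_MakePoint takes longitude first, then latitude)
SELECT *
FROM locations
WHERE ST_DWithin(
    coordinates,
    ST_SetSRID(ST_MakePoint(-0.1276, 51.5072), 4326)::geography,
    2000
);
```

In a Laravel migration you would run the DDL through raw statements, and bind the longitude, latitude, and radius as query parameters at runtime.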
For a "find nearby restaurants" feature, this is the difference between a query that scans 50,000 rows and one that checks 50. The user gets results instantly. Your server handles 10x the traffic. All because you let the database do what databases are good at: filtering data before it reaches your application.
Why This Matters Beyond Distance
The principle applies to more than just proximity searches. Any time you're filtering or sorting data in application code that could be filtered in the database, you're doing it wrong. Databases are optimised for these operations. Your PHP script is not.
PostGIS handles more than distance. It does geometric containment - is this point inside this polygon? It does intersection tests - does this delivery route cross this traffic zone? It does nearest-neighbour searches - what's the closest hospital to this address? All indexed. All fast.
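Each of these is one query. As hedged sketches - the table and column names (`zones.boundary`, `routes.path`, `hospitals.coordinates`) are illustrative, and the columns are assumed to be geometry in SRID 4326:

```sql
-- Containment: is this point inside this polygon?
SELECT * FROM zones
WHERE ST_Contains(boundary,
    ST_SetSRID(ST_MakePoint(-0.1276, 51.5072), 4326));

-- Intersection: does this delivery route cross this traffic zone?
SELECT r.* FROM routes r
JOIN zones z ON ST_Intersects(r.path, z.boundary)
WHERE z.name = 'congestion_zone';

-- Nearest neighbour: the closest hospital to a point,
-- using the KNN distance operator in the ORDER BY
SELECT * FROM hospitals
ORDER BY coordinates <-> ST_SetSRID(ST_MakePoint(-0.1276, 51.5072), 4326)
LIMIT 1;
```

The nearest-neighbour ordering is the notable one: the <-> operator can use the GiST index directly, so the LIMIT 1 lookup doesn't need to compute a distance for every row.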
The broader lesson is about where computation happens. Application code is for business logic. Databases are for data operations. When you blur that line - when you pull data into PHP just to filter it and send less data back to the client - you're creating performance problems that don't need to exist.
The Practical Difference
Developers often avoid PostGIS because it feels like overkill. "I'm just finding nearby coffee shops. I don't need a spatial database." But the setup cost is minimal - install an extension, change a column type, add an index. The performance gain is not minimal. It's the difference between a feature that works and one that scales.
As your dataset grows, Haversine loops become a bottleneck. You start caching results. You start limiting query ranges. You start building workarounds for a problem that shouldn't exist. PostGIS solves it at the root. It's not sexy. It's just correct.
For any Laravel app dealing with location data - delivery services, store locators, property searches, event finders - PostGIS is the right tool. Not because it's cutting-edge. Because it's boring, reliable, and fast. That's what production infrastructure should be.