5 Lessons from Uber's Airport Forecasting (That Apply to Every Org)

Uber has a problem at airports:

Too many drivers = wasted time waiting.

Too few = riders can't get cars.

Both kill the marketplace.

At peak times, drivers can wait 45+ minutes in queue. During slow periods, riders wait 20+ minutes for pickup. Neither side wins.

They published research on predicting demand and managing driver queues. Buried in the technical details were lessons every Product and Ops leader needs.

— 01 —

The Two-Model Trap 

Uber originally built two models: one predicted demand, the other estimated queue length. They stitched the outputs together with math. But the stitched system had poor accuracy in the 0-15 minute window, exactly when drivers decide whether to enter the queue.

They rebuilt it as a single model.

Accuracy improved immediately.
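A toy simulation shows why stitching hurts (all numbers invented; this illustrates error propagation, not Uber's actual models): two chained estimates each carrying ~10% error compound through a division, while a single direct estimate keeps its ~10% error.

```python
import random

random.seed(0)

# Hypothetical ground truth: wait_minutes = demand / service_rate.
true_demand, true_rate = 120.0, 4.0
true_wait = true_demand / true_rate  # 30 minutes

def stitched_estimate():
    # Two separate models, each ~10% relative error; errors compound
    # through the division that stitches them together.
    demand_pred = true_demand * (1 + random.gauss(0, 0.10))
    rate_pred = true_rate * (1 + random.gauss(0, 0.10))
    return demand_pred / rate_pred

def joint_estimate():
    # One model predicts the wait directly, with the same ~10% error.
    return true_wait * (1 + random.gauss(0, 0.10))

def mean_abs_error(estimator, n=10_000):
    return sum(abs(estimator() - true_wait) for _ in range(n)) / n

stitched_err = mean_abs_error(stitched_estimate)
joint_err = mean_abs_error(joint_estimate)
print(f"stitched MAE: {stitched_err:.2f} min, joint MAE: {joint_err:.2f} min")
```

The stitched pipeline's error is roughly √2 times larger, and that gap is worst exactly where small errors matter: the short horizons where decisions get made.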

Your version:

Sales forecasts demand.

Ops forecasts capacity.

Finance forecasts budget.

You reconcile quarterly in spreadsheets.

By then, the moment's passed.

— 02 —

Simple Beats Expensive 

Uber tested everything: classical statistics, random forests, deep learning. A simple statistical model from the 1970s beat out their most expensive deep learning approach. A 12.5% improvement came from better features, not fancier models.

Uber's research confirms what most AI vendors won't tell you: simple statistical methods often beat expensive ML.
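The winning classical model isn't named here, but the family it comes from (exponential smoothing, developed in the 1950s-70s) is simple enough to sketch in a few lines. The pickup counts below are made up:

```python
def ses_forecast(series, alpha=0.3):
    """Next-step forecast via simple exponential smoothing:
    level = alpha * y + (1 - alpha) * level, seeded with the first value."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Made-up hourly pickup counts at one terminal.
pickups = [40, 42, 41, 45, 50, 48, 52, 55]
print(round(ses_forecast(pickups), 2))  # → 49.71
```

Richer variants add trend, seasonality, and external regressors, and per the research, that's where the 12.5% gain came from: better inputs, not a fancier model.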

— 03 —

Context Beats Compute 

Uber feeds their models flight schedules, weather forecasts, events, strikes, transit disruptions. Not just historical trip data.

Your capacity planning probably ignores equivalent signals: competitor launches, regulatory changes, adjacent market shifts.

Too many teams are training on the past while the present is already different.
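To make "context" concrete, a feature row for one 15-minute airport demand slot might mix backward-looking history with forward-looking signals. Every field name here is hypothetical, not Uber's schema:

```python
def build_features(history, context):
    """history: trips per 15-min bucket; context: forward-looking signals.
    All field names are illustrative."""
    return {
        "trips_last_hour": sum(history[-4:]),        # four 15-min buckets
        "trips_same_slot_last_week": history[-672],  # 7 days x 96 slots
        "scheduled_arrivals_next_hour": context["arrivals"],
        "rain_probability": context["rain_prob"],
        "event_nearby": int(context["event"]),
    }

row = build_features(
    history=[30] * 700,
    context={"arrivals": 12, "rain_prob": 0.3, "event": True},
)
print(row["scheduled_arrivals_next_hour"], row["event_nearby"])  # → 12 1
```

Only the first two fields come from trip history; the rest exist nowhere in the past, which is the point.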

— 04 —

Most Data Problems Are Silo Problems 

New Year's Eve happens once a year per city. Not enough data to train models.

Uber's solution: Train one model on hundreds of cities simultaneously, and accuracy jumps.
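The pooling idea is easy to illustrate (cities and numbers invented): a brand-new market has one New Year's Eve on record, but a pooled estimate draws on every city's history.

```python
# Made-up NYE demand multipliers (demand vs. a normal night), per city.
nye_obs = {
    "city_a": [2.1],              # new market: one NYE on record
    "city_b": [1.8, 2.4],
    "city_c": [2.0, 1.7, 2.3],
}

def per_city_estimate(city):
    obs = nye_obs[city]
    return sum(obs) / len(obs)    # rests on 1-3 points: noisy

def pooled_estimate():
    all_obs = [x for obs in nye_obs.values() for x in obs]
    return sum(all_obs) / len(all_obs)  # one estimate from 6 points

print(per_city_estimate("city_a"), round(pooled_estimate(), 2))  # → 2.1 2.05
```

Real global models go further than a pooled average (city-level features let the model specialize), but the principle is the same: rare events stop being rare when you stop slicing the data.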

You're probably doing the opposite: separate forecasts per product, per market, per segment. That failed EMEA launch? It's predicting your US roadmap, but the learning is siloed by business unit.

Every failed pilot is a leading indicator. Every unexpected win is a pattern. Too few teams can see it, because Product doesn't share data with Ops, and neither talks to Sales.

— 05 —

A 12.5% Boost Beats a 10x Transformation

Uber found 12.5% better forecast accuracy creates "direct business impact."

Meanwhile, your board wants 10x from AI.

Reality: 12.5% better forecasting

= 12.5% less idle capacity

+ 12.5% fewer stockouts

+ 12.5% better resource allocation.

That compounds.

The ROI is in precision, not transformation.

Most orgs chase AI moonshots while missing operational improvements in existing data.

What's one forecast in your org that's stitched together from multiple models?

Source: Forecasting Models to Improve Driver Availability at Airports, Zheng et al., Aug 2025
