Insights
Writing and research and stuff
Rethinking the plow, or redesigning the farm?
In 1923, Henry Ford owned 75% of the American tractor market. By 1928—5 years later—he walked away a failure.
Problem was, Ford’s tractor was just a mechanized plow. He simply replaced the horse with an engine.
John Deere saw something Ford missed:…
Building an air force or defending the fleet?
In 1921, Billy Mitchell sank an ‘unsinkable’ battleship with an airplane. The Navy's response? Court-martial him, and promote the admirals who said he couldn’t do it.
How many people does it take to say yes?
"A committee where 10 people have to say yes is the opposite of innovation." That's Coinbase CEO Brian Armstrong. Last year, one innovation bet netted them $1.35B in revenue.
Here's the system that made it possible—and why most companies could never replicate it.
Output’s up, vibes are down
Is AI making teams more robotic? Human-AI teams are 50% more productive. But they’re about 25% less social. Output’s up, vibes are down.
New MIT/Johns Hopkins study: 2,234 workers. 11,000+ ads created. Ads tested with 5 million impressions on X.
Researchers found something most AI coverage skips…
The Mandela Effect & the Pony Express
The Pony Express ran for 18 months.
It shut down two days after the transcontinental telegraph was completed.
The riders knew it was coming—telegraph poles were going up the whole time they were riding. But they rode anyway. Because a letter that arrives in 10 days beats one that never arrives.
Trashcans & Astronauts
In 1961, an 11-year-old girl spent four days floating on a cork raft in the open Atlantic. No food, no water, no shelter from the sun.
The crew that finally found her said the hardest part wasn't the search area—it was spotting the raft. Neutral-colored equipment blended into the waves. A human being, drifting in millions of square miles of ocean, was nearly invisible.
That rescue changed everything.
This is stupid, let’s fix it
You don't need an AI strategy. You need wins. I spent 13 years at IDEO watching orgs make the same mistake with design thinking that they're making with AI right now:
They interview stakeholders.
They build a beautiful matrix of prioritized use cases.
Then they pilot whichever ones scored highest on the 2x2.
Six months and a few hundred thousand dollars later, they kill it.
Ripping out your kitchen before you hire a contractor
It takes zero imagination to eliminate a job.
It takes tremendous imagination to invent new ones.
Too many companies take the zero approach.
The bozos effect
One person plus AI equals a two-person team. Harvard proved it with 776 employees at P&G. I've started calling what happens next "the bozos effect." I see it with almost every client now. Give someone AI tools and within a few weeks they're quietly wondering why they need the rest of the team.
It's not imaginary. A Harvard/GitHub study found developers with Copilot shifted toward solo coding and away from collaboration. Not because anyone told them to, but because they could.
Your headshot predicts your paycheck
An AI looked at 96,000 LinkedIn photos. It predicted career success better than GPA, GMAT, or attractiveness. Combined.
New National Bureau of Economic Research study from The Wharton School and Yale University used AI to extract Big Five personality traits from MBA grads' headshots. Then tracked their careers.
Your workflows are infinitely long
Spain measured its border with Portugal at 987km. Portugal measured the same border at 1,214km. Neither was wrong.
In 1951, mathematician Lewis Fry Richardson discovered why:
the finer your ruler, the longer the measurement.
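Richardson’s rule has a clean toy model: a Koch-style curve, where every refinement shrinks the ruler to a third of its length and grows the measured total by 4/3. A minimal sketch (the `koch_length` helper and 1,000 km baseline are illustrative, not the actual Spain–Portugal survey data):

```python
# Toy model of Richardson's coastline paradox using a Koch curve:
# each refinement divides the ruler by 3 and multiplies the
# measured length by 4/3 — so a finer ruler always reads longer.

def koch_length(base_km: float, refinements: int) -> tuple[float, float]:
    """Return (ruler_km, measured_km) after n Koch refinements."""
    ruler = base_km / 3 ** refinements
    measured = base_km * (4 / 3) ** refinements
    return ruler, measured

if __name__ == "__main__":
    for n in range(5):
        ruler, measured = koch_length(1000.0, n)
        print(f"ruler = {ruler:8.2f} km -> measured = {measured:7.1f} km")
```

Run it and the same 1,000 km baseline reads roughly 1,333 km, 1,778 km, and so on as the ruler shrinks — neither measurement is wrong, they just used different rulers.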
History’s worst traffic jam was caused by poor planning.
Companies spend millions on transformation roadmaps. Almost no one thinks through what happens to the org during the transformation. In 2010, China learned this the hard way—and it created history's worst traffic jam.
Floating across the Atlantic
In 1999, four Italians stuffed two old cars with foam and tried to float across the Atlantic Ocean. It took 119 days. Two crew members quit. And the man whose dream started it all died before they reached the other side.
Too many agents spoil the broth
GPT-5 and Claude achieve 25% success working together on a coding task. A single agent doing both jobs? Roughly 50% success.
Stanford and SAP built a benchmark called CooperBench to test something basic: can two AI coding agents collaborate on the same codebase?
They created 652 tasks across 12 real open-source repositories. Each task gave two agents different features to implement—features that were logically compatible but required coordinating on shared code. The kind of thing any software team does every day.
Across all models, cooperation dropped success rates by 30% on average.
Researchers called it "the curse of coordination."
Results read like a dysfunctional team’s retro…
Death by consensus
Without strict design, multi-agent systems ignore expertise in favor of middling compromise.
Why is your AI so stuck?
Your company spent millions on AI. Nobody's using it. Three-quarters of companies are stuck. And they’re solving the wrong problem.
BCG surveyed 1,000 CxOs across 20 sectors. Only 26% got past proof-of-concept to generate actual value. BCG found that companies throw IT at operational problems. They throw change management at technical gaps. They blame "culture" when the workflow design is broken.
It's like trying to fix a car that won't start by repainting it, then blaming the driver for not turning the key hard enough.
For the past three years, I've been seeing the same repeating failure patterns. If something’s holding your team back, it’s probably some gross cocktail of these:
AI is getting smarter at math but dumber at choices.
New NBER study just dropped that changes how you should think about the agents you're building into your business.
What they did:
Researchers ran the most comprehensive behavioral economics test on AI to date—16 experiments originally designed to document human irrationality, applied to 12 frontier models across GPT, Claude, Gemini, and Llama families.
Same tests Kahneman and Tversky used to prove humans violate Expected Utility theory. Loss aversion. Probability weighting. Hyperbolic discounting. The classics.
What they found:
Advanced LLMs exhibit a split personality.
5 Lessons from Uber's Airport Forecasting (That Apply to Every Org)
Uber has a problem at airports:
Too many drivers = wasted time waiting.
Too few = riders can't get cars.
Both kill the marketplace.
At peak times, drivers can wait 45+ minutes in queue. During slow periods, riders wait 20+ minutes for pickup. Neither side wins.
They published research on predicting demand and managing driver queues. Buried in the technical details were lessons every Product and Ops leader needs.
Innovation through the wrong end of the telescope
New study of AI use across 41.3M scientific papers: Scientists using AI publish 3x more papers, get 4.8x more citations, become team leaders 1.4 years earlier. But the breadth of subjects studied shrank 5%. Collaboration dropped 22%.
Everyone is sprinting toward the same finish line.
Now your innovation team has the same problem.
What are we actually selling?
What are we actually selling? Working on pricing for a CPG client the past few weeks, and the math keeps breaking the same way. Cost of goods is too high. Can't price competitively. Every model we run shows we're 30-40% above competitors.
Classic consultant answer?
"We need to lower COGS or we're dead."
But that's not the important question.
The question is: What are we actually selling?