Mental Models
Essential thinking tools for better decisions
All Cards in This Deck
Explanation
When you're trying to figure out why something happened and you have multiple possible explanations, the one that requires the fewest assumptions is usually the best place to start. Named after the medieval philosopher William of Ockham, this principle helps prevent overthinking. It doesn't mean complex explanations are always wrong, but you should rule out simple explanations before reaching for complicated ones.
Example
Your website crashes at 3pm daily. Complex theory: a sophisticated DDoS attack timed to business hours. Simple reality: that's when the daily backup runs and maxes out server resources. Famous outages blamed on hackers routinely turn out to be a bad config change or a mistyped maintenance command. Your coworker isn't responding to emails? They're not plotting against you; their spam filter caught your messages.
Explanation
Most people solve problems by copying what others have done before (reasoning by analogy). First principles thinking means breaking a problem down to its most fundamental truths and building up solutions from there. Elon Musk made this approach famous—it's how you escape conventional thinking and create breakthrough innovations.
Example
SpaceX: Everyone said rockets cost $65M. Musk asked: What's a rocket made of? Aluminum, titanium, copper, carbon fiber. Raw material cost? $2M. Why the 30x markup? Because that's how aerospace works. So SpaceX built rockets for 10x less. Uber: Taxis are expensive. Why? Medallion system, dispatch overhead, idle time. What if we removed all that? Just connect drivers directly to riders via phones.
Explanation
We naturally focus on successful examples because failures often disappear from view or don't get talked about. During World War II, the military wanted to add armor to planes where they saw bullet holes. Statistician Abraham Wald pointed out the flaw: they should armor where they didn't see holes, because planes hit in those spots didn't make it back to be examined.
Example
'College dropouts like Gates and Zuckerberg became billionaires!' But millions of dropouts struggle in poverty—they don't write books. 'This startup pivoted 3 times and succeeded!' But 99% that pivot 3 times die—and delete their blogs. Gym in January: See fit people, think gym works. Don't see the 90% who quit by February.
Explanation
We judge how likely something is to happen based on how easily we can think of examples, but this leads to systematic errors. Recent, dramatic, or emotional events are easier to remember, so we overestimate their probability. Nobel Prize winner Daniel Kahneman identified this mental shortcut and showed how news coverage and vivid stories distort our risk perception.
Example
After seeing news about shark attacks, beach seems dangerous. Reality: Coconuts kill more people than sharks. Friend gets food poisoning at restaurant, you avoid it forever. But you've eaten there 50 times safely. One publicized kidnapping makes parents paranoid, though crime is at historic lows.
Explanation
Most people only think about the immediate consequences of their decisions (first-order thinking). Second-order thinking means asking 'And then what happens after that?' and 'What will other people do in response?' This deeper thinking prevents unintended consequences and is what separates good decision-makers from poor ones.
Example
Cutting prices to beat a competitor: 1st: Win customers. 2nd: Competitor cuts more, price war begins. 3rd: Both companies unprofitable, quality drops. Uber giving drivers bonuses: 1st: More drivers join. 2nd: Market floods, fewer rides per driver. 3rd: Drivers quit, need bigger bonuses. Working weekends to impress the boss: 1st: More work done. 2nd: Expectation set, burnout builds. 3rd: Performance drops, health issues, quit.
Explanation
Any description or model of reality is necessarily simplified—the menu isn't the actual meal, the business plan isn't the actual business, and the weather forecast isn't the actual weather. We often forget this and mistake our simplified models for reality itself. Philosopher Alfred Korzybski warned: 'The map is not the territory.'
Example
GDP says economy is great, but people are struggling (map vs reality). Your code's documentation is perfect, but the actual code has bugs. Dating profile (map) vs actual person (territory). Company org chart shows clear hierarchy, reality has shadow power structures. Financial models showed mortgage securities were safe; 2008 proved otherwise.
Explanation
When something bad happens, it's usually due to incompetence, lack of information, or honest mistakes rather than malicious intent. People are generally trying their best within their constraints. Assuming evil motives creates unnecessary conflict and prevents you from addressing the real problems—poor systems, lack of training, or miscommunication.
Example
Boss scheduled meeting during your vacation: Not sabotage—forgot you're away. Coworker's harsh code review: Not personal attack—they just communicate bluntly. Friend didn't invite you: Not exclusion—assumed you were busy or genuinely forgot. IT deployed breaking change: Not incompetence—tested in wrong environment.
Explanation
Investor Warren Buffett teaches that you should stick to areas where you have genuine expertise and experience. Inside your circle of competence, you can make informed decisions. Outside it, you're essentially guessing. The size of your circle doesn't matter—what matters is honestly knowing where its boundaries are. Ego tempts us to think our circle is bigger than it really is.
Example
Buffett long avoided tech stocks: outside his circle. He sticks to businesses he understands: insurance, retail, consumer goods. A surgeon shouldn't give legal advice. A programmer shouldn't design bridges. WeWork's founder tried to revolutionize education, farming, and living, all outside his competence. Result: roughly $40B of valuation wiped out.
Explanation
If something non-perishable has been around for a long time, it will probably continue to exist for a long time. Shakespeare's plays have been performed for 400 years and will likely be performed for another 400. Last year's bestseller probably won't last another decade. Named after Lindy's restaurant in New York where comedians noticed that long-running shows tended to keep running.
Example
Shakespeare has been read for 400 years—will be read in 2424. This year's bestseller? Probably forgotten by 2030. COBOL (1959) still runs banks. The hot JavaScript framework from last year? Already deprecated. Restaurants open 50 years will likely last; the trendy spot that opened last month, probably not.
Explanation
This approach, named after mathematician Thomas Bayes, means starting with an initial belief about something, then gradually updating that belief as you gather new evidence. The key is being willing to change your mind when the evidence changes, while weighing new information based on how reliable and relevant it is.
Example
Your coworker is usually reliable (prior: 90% reliable). They miss a deadline. Don't immediately decide they're unreliable; update slightly (now 85%). If they miss three more, update significantly (now 60%). One great delivery brings it back up. Beliefs should evolve with the evidence rather than flip like a switch.
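The card's percentages are illustrative; the machinery behind them is just Bayes' rule. Here is a minimal Python sketch of the same update, where the likelihoods (how often reliable versus unreliable coworkers miss deadlines) are invented purely for illustration:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Hypothesis H: "my coworker is reliable". Prior belief: 90%.
belief = 0.90

# Assumed likelihoods (illustrative, not from the card): a reliable coworker
# misses any given deadline 20% of the time, an unreliable one 30% of the time.
P_MISS_IF_RELIABLE = 0.20
P_MISS_IF_UNRELIABLE = 0.30

for n in range(1, 5):
    belief = update(belief, P_MISS_IF_RELIABLE, P_MISS_IF_UNRELIABLE)
    print(f"after miss {n}: {belief:.0%}")
# Prints roughly 86%, 80%, 73%, 64%: a gradual drift, not a binary flip.
```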
Explanation
Air Force Colonel John Boyd developed this decision-making process for fighter pilots, but it applies to any competitive situation. The key insight is that speed matters more than perfection—if you can observe what's happening, orient yourself to the situation, decide what to do, and act faster than your competition, you'll win even if your individual decisions aren't perfect.
Example
Startups vs corporations: Startup observes customer need Monday, orients Tuesday, decides Wednesday, ships Thursday. Corporation still scheduling the meeting. Netflix vs Blockbuster: Netflix cycled through DVD-by-mail, streaming, original content while Blockbuster was still deciding about late fees.
Explanation
Nobel Prize-winning physicist Richard Feynman believed that if you truly understand something, you should be able to explain it in simple terms that anyone can understand. The process of trying to teach something reveals gaps in your knowledge that you didn't know existed. If you can't explain it simply, you probably don't understand it as well as you think.
Example
Think you understand blockchain? Explain it to a 12-year-old. Can't? You memorized jargon, not concepts. Feynman taught complex physics using simple analogies: electrons as spinning plates, particles as rubber bands. If the expert can't explain simply, they're not an expert—they're a memorizer.
Explanation
Before changing or removing any existing system, rule, or process, first make sure you understand why it was created in the first place. Writer G.K. Chesterton pointed out that things that seem pointless or outdated usually exist because they solved a problem that you might not be aware of. People who don't understand this history often end up recreating the same problems.
Example
Developer: 'This validation check is useless, removing it.' Month later: database corrupted. The check prevented bad data from an integration they didn't know about. Company removes 'pointless' approval process. Result: fraud spike. The process was preventing embezzlement. Society abandons tradition. Discovers tradition solved problems they forgot existed.
Explanation
Toyota developed this technique to find root causes without jumping to conclusions. Each 'why' peels back a layer, revealing the real problem. Most people stop at symptoms (why #1 or #2) and miss the actual cause (why #4 or #5).
Example
Amazon's website crashed: Why? Database overloaded. Why? Too many product searches. Why? New recommendation algorithm queries 10x more. Why? No performance testing before deploy. Why? Deadline pressure skipped QA. ROOT CAUSE: Broken deployment process. FIX: Mandatory load testing. Another: Customers complaining about app. Why? It's slow. Why? Images take forever to load. Why? They're 5MB each. Why? Designer exports at print quality. Why? Nobody specified mobile requirements. ROOT: No design guidelines.
Explanation
Warren Buffett's business partner Charlie Munger loves this approach: instead of asking 'How can I succeed?' ask 'How can I guarantee failure?' then avoid doing those things. It's often easier to avoid obvious mistakes than to figure out the perfect strategy. Sometimes not being stupid is enough to look smart.
Example
Want a great marriage? List what destroys marriages: contempt, stonewalling, no communication, financial secrets. Don't do those. Want users to love your product? List what makes users rage-quit: slow loading, confusing UI, lost data, expensive pricing. Eliminate those first. Amazon works backwards from press releases: Write the launch announcement first, then build the product that deserves it.
Explanation
Most systems contain feedback loops that either amplify changes (reinforcing loops) or resist changes (balancing loops). Reinforcing loops create snowball effects where small changes grow larger, like how rich people get richer. Balancing loops maintain stability, like how a thermostat keeps temperature constant. Understanding these loops helps predict how systems will respond to changes.
Example
Social media: Popular posts get more views (reinforcing) → even more popular. But platform algorithms limit reach eventually (balancing). Housing prices: High prices → people leave → prices drop (balancing). Tech hubs: talent attracts companies → companies attract talent (reinforcing) → San Francisco.
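A toy simulation (every rate here is invented purely for illustration) shows the two shapes: a reinforcing loop compounds on itself, while a balancing loop keeps pulling a value back toward a set point:

```python
# Reinforcing loop: each round of views attracts more views (rate invented).
views = 100.0
for _ in range(10):
    views += 0.20 * views                 # 20% of current viewers bring someone new
print(f"reinforcing: {views:.0f} views")  # ~619 after 10 rounds: the snowball

# Balancing loop: a thermostat keeps pulling temperature toward the set point.
temperature, set_point = 15.0, 21.0
for _ in range(10):
    temperature += 0.5 * (set_point - temperature)  # close half the gap each step
print(f"balancing: {temperature:.1f} degrees")      # settles at ~21.0
```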
Explanation
Compounding happens when the results of an action generate further results, which generate even more results. This creates exponential rather than linear growth. The key insight is that small, consistent improvements can lead to dramatic changes over time, while small negative changes can lead to dramatic deterioration.
Example
Money: $1000 at 10% annually becomes $2600 after 10 years, $6700 after 20 years. Reading: 30 minutes daily = 200+ books over 10 years = massive knowledge advantage. Habits: 1% better daily = 37x better over a year. Relationships: Small acts of kindness compound into deep trust.
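The arithmetic behind those figures is just repeated multiplication; a few lines of Python reproduce them (the card's dollar amounts are rounded):

```python
# $1,000 growing at 10% per year
for years in (10, 20):
    print(f"${1000 * 1.10 ** years:,.0f} after {years} years")  # $2,594 and $6,727

# Getting 1% better every day for a year
print(f"{1.01 ** 365:.1f}x better after one year")   # 37.8x

# The same force in reverse: 1% worse every day
print(f"{0.99 ** 365:.3f}x of where you started")    # 0.026x
```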
Explanation
Nassim Taleb identified three categories: fragile things break under stress, robust things resist stress, and antifragile things actually get stronger from stress. Understanding this helps you build systems and make decisions that benefit from volatility and uncertainty rather than just surviving them.
Example
Muscles become stronger when stressed through exercise. Immune systems strengthen from exposure to mild pathogens. Startups forced to experiment during downturns often emerge with better business models, not just intact. Decentralized systems improve when individual failures expose weak points that then get fixed.
Explanation
When evaluating a specific situation, we tend to focus on the unique details and ignore the broader statistical reality (base rate). This leads to overconfidence in predictions and poor probability estimates. The base rate tells you what usually happens in similar situations, which is often the best predictor of what will happen in this specific case.
Example
90% of restaurants fail within 5 years (base rate), but every new restaurant owner thinks theirs will succeed because of their unique concept, location, or passion. 85% of startups fail, but founders focus on their specific advantages. Most diets fail long-term, but dieters focus on their personal motivation.
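To see how much the base rate dominates the specifics, here is a small worked calculation that starts from the card's 85% startup failure rate and adds a hypothetical positive signal; the signal strengths are assumptions chosen only to illustrate the arithmetic:

```python
# Base rate from the card: 85% of startups fail, so 15% succeed.
p_success = 0.15

# Hypothetical signal (assumed numbers, not data): a "strong founding team"
# shows up in 60% of eventual successes but also in 30% of failures.
p_signal_given_success = 0.60
p_signal_given_failure = 0.30

p_signal = (p_signal_given_success * p_success
            + p_signal_given_failure * (1 - p_success))
p_success_given_signal = p_signal_given_success * p_success / p_signal

print(f"{p_success_given_signal:.0%}")  # ~26%: better than the 15% base rate, far from a sure thing
```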