This is a book summary of The Great Mental Models Volume 1: General Thinking Concepts by Shane Parrish of Farnam Street.
- All content in quotation marks is from the author unless otherwise stated.
- All content not in quotation marks is paraphrased from original quotes.
- I’ve added emphasis in bold for readability/skimmability.
Book Summary Contents:
- About Farnam Street & The Book
- What are Mental Models?
- The Map is not the Territory
- Circle of Competence
- First Principles Thinking
- Thought Experiment
- Second-Order Thinking
- Probabilistic Thinking
- Inversion
- Occam’s Razor
- Hanlon’s Razor
- Bonus: 4 Supporting Ideas
An Intro to General Thinking Concepts: The Great Mental Models Volume 1 by Farnam Street (Book Summary)
About Farnam Street & The Great Mental Models
“Farnam Street is devoted to helping you develop an understanding of how the world really works, make better decisions, and live a better life. We address such topics as mental models, decision-making, learning, reading, and the art of living.”
About Farnam Street (founded by Shane Parrish):
- “In a world full of noise, Farnam Street is a place where you can step back and think about time-tested ideas while asking yourself questions that lead to meaningful understanding. We cover ideas from science and the humanities that will not only expand your intellectual horizons but also help you connect ideas, think in multidisciplinary ways, and explore meaning.”
- “I started writing about my learnings, the result being the Farnam Street website. The last eight years of my life have been devoted to identifying and learning the mental models that have the greatest positive impact, and trying to understand how we think, how we update, how we learn, and how we can make better decisions.”
Shane Parrish acknowledges that he’s standing on the shoulders of giants:
- “This book wouldn’t be possible without the insights of others. The goal of Farnam Street is to master the best of what other people have figured out. True to this mission, everything in here began with someone else.”
- “The ideas in these volumes are not my own, nor do I deserve any credit for them … I’ve only curated, edited, and shaped the work of others before me.”
- “If you want to suck up someone’s brain, you should simply read a book. All the great wisdom of humanity is written down somewhere.”
- “These pages are filled with ideas from many great minds. It’s only fair to point out that any idea you found in this book comes from someone else.”
About The Great Mental Models:
- “When I first started learning about multidisciplinary thinking there was no source of collected wisdom—no place where I could find the big ideas from multiple disciplines in one place. This book, and indeed, this series, is meant to bring some of the big invariant ideas into the world to help us better understand the world we live in and how it interconnects.”
- “The Great Mental Models Project is a labor of love to help equalize opportunity in the world by making a high-quality, multidisciplinary, interconnected education free and available to everyone.”
- “In identifying the Great Mental Models we have looked for elementary principles, the ideas from multiple disciplines that form a time-tested foundation.”
- “The Great Mental Models are not just theory. They are actionable insights that can be used to effect positive change in your life.”
What are Mental Models?
“When you learn to see the world as it is, and not as you want it to be, everything changes. The solution to any problem becomes more apparent when you can view it through more than one lens. You’ll be able to spot opportunities you couldn’t see before, avoid costly mistakes that may be holding you back, and begin to make meaningful progress in your life. That’s the power of mental models.”
On mental models:
- “Mental models describe the way the world works. They shape how we think, how we understand, and how we form beliefs. Largely subconscious, mental models operate below the surface. We’re not generally aware of them and yet they’re the reason when we look at a problem we consider some factors relevant and others irrelevant. They are how we infer causality, match patterns, and draw analogies. They are how we think and reason.”
- “A mental model is simply a representation of how something works. We cannot keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks. Whether we realize it or not, we then use these models every day to think, decide, and understand our world.”
- “Being able to draw on a repertoire of mental models can help us minimize risk by understanding the forces that are at play. Likely consequences don’t have to be a mystery.”
It all comes down to better contact with reality:
- “In life and business, the person with the fewest blind spots wins. Removing blind spots means we see, interact with, and move closer to understanding reality.”
- “Removing blind spots means thinking through the problem using different lenses or models. When we do this the blind spots slowly go away and we gain an understanding of the problem. We’re much like the blind men in the classic parable of the elephant, going through life trying to explain everything through one limited lens.”
- “The more lenses used on a given problem, the more of reality reveals itself. The more of reality we see, the more we understand. The more we understand, the more we know what to do.”
- “Understand the interconnections of the world, and see it for how it really is. This understanding allows us to develop causal relationships, which allow us to match patterns, which allow us to draw analogies. All of this so we can navigate reality with more clarity and comprehension of the real dynamics involved.”
- “Better models mean better thinking. The degree to which our models accurately explain reality is the degree to which they improve our thinking.”
- “We need to work hard at synthesizing across the borders of our knowledge, and most importantly, synthesizing all of the ideas we learn with reality itself.”
- “Only by repeated testing of our models against reality and being open to feedback can we update our understanding of the world and change our thinking.”
Our failures to update from interacting with reality spring primarily from three things:
- Not having the right perspective or vantage point: “We have a hard time seeing any system that we are in.”
- Ego-induced denial: “Many of us tend to have too much invested in our opinions of ourselves to see the world’s feedback—the feedback we need to update our beliefs about reality.”
- Distance from the consequences of our decisions: “The further we are from the results of our decisions, the easier it is to keep our current views rather than update them.”
Ultimately, the goal is to build a latticework of mental models in your head:
- “The quality of our thinking is largely influenced by the mental models in our heads.”
- “Exactly the same sort of pattern that graces backyards everywhere, a lattice is a series of points that connect to and reinforce each other. The Great Models can be understood in the same way—models influence and interact with each other to create a structure that can be used to evaluate and understand ideas.”
- “Munger has a way of thinking through problems using what he calls a broad latticework of mental models. These are chunks of knowledge from different disciplines that can be simplified and applied to better understand the world. The way he describes it, they help identify what information is relevant in any given situation, and the most reasonable parameters to work in.”
- “A latticework is an excellent way to conceptualize mental models, because it demonstrates the reality and value of interconnecting knowledge. The world does not isolate itself into discrete disciplines. We only break it down that way because it makes it easier to study it. But once we learn something, we need to put it back into the complex system in which it occurs. We need to see where it connects to other bits of knowledge, to build our understanding of the whole. This is the value of putting the knowledge contained in mental models into a latticework.”
The Map is not the Territory
“The map of reality is not reality. Even the best maps are imperfect. That’s because they are reductions of what they represent.”
About the map is not the territory mental model:
- “In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted.”
- “The only way we can navigate the complexity of reality is through some sort of abstraction.”
- “Models are most useful when we consider them in the context they were created.”
- “We must use some model of the world in order to simplify it and therefore interact with it. We cannot explore every bit of territory for ourselves. We can use maps to guide us, but we must not let them prevent us from discovering new territory or updating our existing maps.”
- “We can’t use maps as dogma. Maps and models are not meant to live forever as static references. The world is dynamic. As territories change, our tools to navigate them must be flexible to handle a wide variety of situations or adapt to the changing times. If the value of a map or model is related to its ability to predict or explain, then it needs to represent reality. If reality has changed the map must change.”
In order to use a map or model as accurately as possible, we should take three important considerations into account:
- Reality is the ultimate update: “We can and should update them based on our own experiences in the territory. That’s how good maps are built: feedback loops created by explorers.”
- Consider the cartographer: “Maps are not purely objective creations. They reflect the values, standards, and limitations of their creators.”
- Maps can influence territories.
The map is not the territory watch-outs:
- “We run into problems when our knowledge becomes of the map, rather than the actual underlying territory it describes.”
- “In using maps, abstractions, and models, we must always be wise to their limitations. They are, by definition, reductions of something far more complex. There is always at least an element of subjectivity, and we need to remember that they are created at particular moments in time.”
- “A map captures a territory at a moment in time. Just because it might have done a good job at depicting what was, there is no guarantee that it depicts what is there now or what will be there in the future.”
Circle of Competence
“When ego and not competence drives what we undertake, we have blind spots. If you know what you understand, you know where you have an edge over others. When you are honest about where your knowledge is lacking you know where you are vulnerable and where you can improve. Understanding your circle of competence improves decision-making and outcomes.”
About the circle of competence mental model:
- “Within our circles of competence, we know exactly what we don’t know. We are able to make decisions quickly and relatively accurately. We possess detailed knowledge of additional information we might need to make a decision with full understanding, or even what information is unobtainable. We know what is knowable and what is unknowable and can distinguish between the two.”
- “Critically, we must keep in mind that our circles of competence extend only so far. There are boundaries on the areas in which we develop the ability to make accurate decisions. In any given situation, there are people who have a circle, who have put in the time and effort to really understand the information. It is also important to remember that no one can have a circle of competence that encompasses everything. There is only so much you can know with great depth of understanding. This is why being able to identify your circle, and knowing how to move around outside of it, is so important.”
- “There is no shortcut to understanding. Building a circle of competence takes years of experience, of making mistakes, and of actively seeking out better methods of practice and thought.”
Three key practices needed in order to build and maintain a circle of competence:
- Curiosity and a desire to learn: “Learning comes when experience meets reflection. You can learn from your own experiences. Or you can learn from the experience of others, through books, articles, and conversations. Learning everything on your own is costly and slow. You are one person. Learning from the experiences of others is much more productive. You need to always approach your circle with curiosity, seeking out information that can help you expand and strengthen it.”
- Monitoring: “You need to monitor your track record in areas which you have, or want to have, a circle of competence. And you need to have the courage to monitor honestly so the feedback can be used to your advantage.”
- Feedback: “You must occasionally solicit external feedback. This helps build a circle, but is also critical for maintaining one.”
Circle of competence watch-outs:
- “It is extremely difficult to maintain a circle of competence without an outside perspective. We usually have too many biases to solely rely on our own observations. It takes courage to solicit external feedback, so if defensiveness starts to manifest, focus on the result you hope to achieve.”
- “One of the essential requirements of a circle of competence is that you can never take it for granted. You can’t operate as if a circle of competence is a static thing, that once attained is attained for life. The world is dynamic. Knowledge gets updated, and so too must your circle.”
First Principles Thinking
“First principles thinking is one of the best ways to reverse-engineer complicated situations and unleash creative possibility. Sometimes called reasoning from first principles, it’s a tool to help clarify complicated problems by separating the underlying ideas or facts from any assumptions based on them. What remain are the essentials.”
About the first principles thinking mental model:
- “First principles thinking identifies the elements that are, in the context of any given situation, non-reducible.”
- “Reasoning from first principles allows us to step outside of history and conventional wisdom and see what is possible. When you really understand the principles at work, you can decide if the existing methods make sense. Often they don’t.”
- “Thinking through first principles is a way of taking off the blinders. Most things suddenly seem more possible.”
- “If we want to identify the principles in a situation to cut through the dogma and the shared belief, there are two techniques we can use: Socratic questioning and the Five Whys.”
Socratic questioning can be used to establish first principles through stringent analysis:
- Clarifying your thinking and explaining the origins of your ideas: Why do I think this? What exactly do I think?
- Challenging assumptions: How do I know this is true? What if I thought the opposite?
- Looking for evidence: How can I back this up? What are the sources?
- Considering alternative perspectives: What might others think? How do I know I am correct?
- Examining consequences and implications: What if I am wrong? What are the consequences if I am?
- Questioning the original questions: Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?
The goal of the Five Whys is to land on a “what” or “how”:
- “It is not about introspection, such as ‘Why do I feel like this?’ Rather, it is about systematically delving further into a statement or concept so that you can separate reliable knowledge from assumption. If your ‘whys’ result in a statement of falsifiable fact, you have hit a first principle. If they end up with a ‘because I said so’ or ‘it just is’, you know you have landed on an assumption that may be based on popular opinion, cultural myth, or dogma. These are not first principles.”
Thought Experiment
“Thought experiments can be defined as ‘devices of the imagination used to investigate the nature of things.’”
About the thought experiment mental model:
- “Thought experiments are powerful because they help us learn from our mistakes and avoid future ones. They let us take on the impossible, evaluate the potential consequences of our actions, and re-examine history to make better decisions. They can help us both figure out what we really want, and the best way to get there.”
- “Thought experiments tell you about the limits of what you know and the limits of what you should attempt. In order to improve our decision-making and increase our chances of success, we must be willing to probe all of the possibilities we can think of.”
- “Its chief value is that it lets us do things in our heads we cannot do in real life, and so explore situations from more angles than we can physically examine and test for.”
- “Thought experiments are more than daydreaming. They require the same rigor as a traditional experiment in order to be useful.”
- “One of the real powers of the thought experiment is that there is no limit to the number of times you can change a variable to see if it influences the outcome.”
Much like the scientific method, a thought experiment generally has the following steps:
- Ask a question.
- Conduct background research.
- Construct hypothesis.
- Test with (thought) experiments.
- Analyze outcomes and draw conclusions.
- Compare to hypothesis and adjust accordingly (new question, etc.).
A few areas in which thought experiments are tremendously useful:
- Imagining physical impossibilities.
- Re-imagining history.
- Intuiting the non-intuitive.
Second-Order Thinking
“Almost everyone can anticipate the immediate results of their actions. This type of first-order thinking is easy and safe but it’s also a way to ensure you get the same results that everyone else gets. Second-order thinking is thinking farther ahead and thinking holistically.”
About the second-order thinking mental model:
- “It requires us to not only consider our actions and their immediate consequences, but the subsequent effects of those actions as well. Failing to consider the second- and third-order effects can unleash disaster.”
- “We don’t make decisions in a vacuum and we can’t get something for nothing. When making choices, considering consequences can help us avoid future problems. We must ask ourselves the critical question: And then what? Consequences come in many varieties, some more tangible than others. Thinking in terms of the system in which you are operating will allow you to see that your consequences have consequences.”
- “Very often, the second level of effects is not considered until it’s too late. This concept is often referred to as the ‘Law of Unintended Consequences’ for this very reason.”
- “Any comprehensive thought process considers the effects of the effects as seriously as possible.”
- “High degrees of connections make second-order thinking all the more critical, because denser webs of relationships make it easier for actions to have far-reaching consequences.”
Two areas where second-order thinking can be used to great benefit:
- Prioritizing long-term interests over immediate gains: “Being aware of second-order consequences and using them to guide your decision-making may mean the short term is less spectacular, but the payoffs for the long term can be enormous. By delaying gratification now, you will save time in the future.”
- Constructing effective arguments.
Second-order thinking watch-outs:
- “Second-order thinking, as valuable as it is, must be tempered in one important way: You can’t let it lead to the paralysis of the Slippery Slope Effect, the idea that if we start with action A, everything after is a slippery slope down to hell, with a chain of consequences B, C, D, E, and F.”
- “Second-order thinking needs to evaluate the most likely effects and their most likely consequences, checking our understanding of what the typical results of our actions will be. If we worried about all possible effects of effects of our actions, we would likely never do anything, and we’d be wrong. How you’ll balance the need for higher-order thinking with practical, limiting judgment must be taken on a case-by-case basis.”
Probabilistic Thinking
“Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass.”
About the probabilistic thinking mental model:
- “In a world where each moment is determined by an infinitely complex set of factors, probabilistic thinking helps us identify the most likely outcomes. When we know these our decisions can be more precise and effective.”
- “Our lack of perfect information about the world gives rise to all of probability theory, and its usefulness. We know now that the future is inherently unpredictable because not all variables can be known and even the smallest error imaginable in our data very quickly throws off our predictions. The best we can do is estimate the future by generating realistic, useful probabilities.”
- “Successfully thinking in shades of probability means roughly identifying what matters, coming up with a sense of the odds, doing a check on our assumptions, and then making a decision. We can act with a higher level of certainty in complex, unpredictable situations. We can never know the future with exact precision. Probabilistic thinking is an extremely useful tool to evaluate how the world will most likely look so that we can effectively strategize.”
Three concepts to know:
- Bayesian thinking (or Bayesian updating): “Given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new. As much of it as possible. Bayesian thinking allows us to use all relevant prior information in making decisions. Statisticians might call it a base rate, taking in outside information about past situations like the one you’re in … It is important to remember that priors themselves are probability estimates. For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, this is called the likelihood ratio or the Bayes factor. Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually some priors are replaced completely. This is an ongoing cycle of challenging and validating what you believe you know.”
- Fat-tailed curves: “In a bell curve the extremes are predictable. There can only be so much deviation from the mean. In a fat-tailed curve there is no real cap on extreme events … The more extreme events that are possible, the longer the tails of the curve get. Any one extreme event is still unlikely, but the sheer number of options means that we can’t rely on the most common outcomes as representing the average. The more extreme events that are possible, the higher the probability that one of them will occur. Crazy things are definitely going to happen, and we have no way of identifying when.”
- Asymmetries: “You need to think about something we might call ‘metaprobability’—the probability that your probability estimates themselves are any good.”
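The Bayesian updating described above reduces to a one-line calculation. Here is a minimal sketch; the prior and likelihood numbers are invented for the illustration:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a belief after one piece of evidence (Bayes' rule)."""
    numerator = prior * likelihood_if_true
    evidence = numerator + (1 - prior) * likelihood_if_false
    return numerator / evidence

# Hypothetical scenario: we hold a belief with 70% confidence (the prior),
# and the new evidence is twice as likely to appear if the belief is true.
posterior = bayes_update(prior=0.70, likelihood_if_true=0.8, likelihood_if_false=0.4)
print(round(posterior, 3))  # 0.824 -- the prior is revised by degrees, not discarded
```

Note how a challenged prior is strengthened or weakened by degrees, exactly as the quote describes, rather than flipped between true and false.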
Two more ideas to know:
- Orders of Magnitude: “Nassim Taleb puts his finger in the right place when he points out our naive use of probabilities. In The Black Swan, he argues that any small error in measuring the risk of an extreme event can mean we’re not just slightly off, but way off—off by orders of magnitude, in fact. In other words, not just 10% wrong but ten times wrong, or 100 times wrong, or 1,000 times wrong. Something we thought could only happen every 1,000 years might be likely to happen in any given year! This is using false prior information and results in us underestimating the probability of the future distribution being different.”
- Anti-fragility: “We can think about three categories of objects: Ones that are harmed by volatility and unpredictability, ones that are neutral to volatility and unpredictability, and finally, ones that benefit from it. The latter category is antifragile—like a package that wants to be mishandled. Up to a point, certain things benefit from volatility, and that’s how we want to be. Why? Because the world is fundamentally unpredictable and volatile, and large events—panics, crashes, wars, bubbles, and so on—tend to have a disproportionate impact on outcomes.”
Inversion
“The root of inversion is ‘invert,’ which means to upend or turn upside down. As a thinking tool it means approaching a situation from the opposite end of the natural starting point.”
About the inversion mental model:
- “Most of us tend to think one way about a problem: forward. Inversion allows us to flip the problem around and think backward.”
- “Whatever angle you choose to approach your problem from, you need to then follow with consideration of the opposite angle. Think about not only what you could do to solve a problem, but what you could do to make it worse—and then avoid doing that, or eliminate the conditions that perpetuate it.”
Two approaches to applying inversion in your life:
- “Start by assuming that what you’re trying to prove is either true or false, then show what else would have to be true.”
- “Instead of aiming directly for your goal, think deeply about what you want to avoid and then see what options are left over.”
Psychologist Kurt Lewin’s process (force field analysis):
1. Identify the problem.
2. Define your objective.
3. Identify the forces that support change towards your objective.
4. Identify the forces that impede change towards the objective.
5. Strategize a solution! This may involve both augmenting or adding to the forces in step 3, and reducing or eliminating the forces in step 4.
Occam’s Razor
“Simpler explanations are more likely to be true than complicated ones. This is the essence of Occam’s Razor, a classic principle of logic and problem-solving.”
About the Occam’s razor mental model:
- “Named after the medieval logician William of Ockham, Occam’s Razor is a general rule by which we select among competing explanations. Ockham wrote that ‘a plurality is not to be posited without necessity’—essentially that we should prefer the simplest explanation with the fewest moving parts. They are easier to falsify, easier to understand, and generally more likely to be correct.”
- “Occam’s Razor is not an iron law but a tendency and a mind-frame you can choose to use: If all else is equal, that is if two competing models both have equal explanatory power, it’s more likely that the simple solution suffices.”
- “Occam’s Razor is a great tool for avoiding unnecessary complexity by helping you identify and commit to the simplest explanation possible.”
Why are more complicated explanations less likely to be true?
- “Take two competing explanations, each of which seem to equally explain a given phenomenon. If one of them requires the interaction of three variables and the other the interaction of thirty variables, all of which must have occurred to arrive at the stated conclusion, which of these is more likely to be in error? If each variable has a 99% chance of being correct, the first explanation is only 3% likely to be wrong. The second, more complex explanation, is about nine times as likely to be wrong, or 26%. The simpler explanation is more robust in the face of uncertainty.”
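The arithmetic in that passage is easy to reproduce. A quick sketch, using the 99%-per-variable figure from the quote above:

```python
def prob_wrong(n_variables, p_correct=0.99):
    """Chance that at least one of n independent variables is in error."""
    return 1 - p_correct ** n_variables

simple = prob_wrong(3)     # ~0.03, about 3%
complex_ = prob_wrong(30)  # ~0.26, about 26%
print(round(simple, 2), round(complex_, 2), round(complex_ / simple, 1))  # 0.03 0.26 8.8
```

The thirty-variable explanation is nearly nine times as likely to contain an error, even though each individual variable is almost certainly correct.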
Occam’s razor watch-outs:
- “One important counter to Occam’s Razor is the difficult truth that some things are simply not that simple.”
- “Simple as we wish things were, irreducible complexity, like simplicity, is a part of our reality. Therefore, we can’t use this Razor to create artificial simplicity. If something cannot be broken down any further, we must deal with it as it is.”
Hanlon’s Razor
“Hanlon’s Razor states that we should not attribute to malice that which is more easily explained by stupidity.”
About the Hanlon’s razor mental model:
- “Failing to prioritize stupidity over malice causes things like paranoia. Always assuming malice puts you at the center of everyone else’s world. This is an incredibly self-centered approach to life. In reality, for every act of malice, there is almost certainly far more ignorance, stupidity, and laziness.”
- “Hanlon’s Razor demonstrates that there are fewer true villains than you might suppose—what people are is human, and like you, all humans make mistakes and fall into traps of laziness, bad thinking, and bad incentives.”
- “This model reminds us that people do make mistakes. It demands that we ask if there is another reasonable explanation for the events that have occurred. The explanation most likely to be right is the one that contains the least amount of intent.”
- “By not generally assuming that bad results are the fault of a bad actor, we look for options instead of missing opportunities.”
- “Hanlon’s Razor, when practiced diligently as a counter to confirmation bias, empowers us, and gives us far more realistic and effective options for remedying bad situations.”
Hanlon’s razor watch-out:
- “As useful as it can be, it is, however, important not to overthink this model. Hanlon’s Razor is meant to help us perceive stupidity or error, and their inadvertent consequences. It says that of all possible motives behind an action, the ones that require the least amount of energy to execute (such as ignorance or laziness) are more likely to occur than one that requires active malice.”
Bonus: 4 Supporting Ideas
1. Three Buckets of Knowledge
2. Falsifiability
3. Necessity and Sufficiency
4. Causation vs Correlation
1. Three Buckets of Knowledge:
“The larger and more relevant the sample size, the more reliable the model based on it is.”
Peter Kaufman says:
- “Every statistician knows that a large, relevant sample size is their best friend. What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history, you can pick your own number, I picked 20,000 years of recorded human behavior. Those are the three largest sample sizes we can access and the most relevant.”
2. Falsifiability:
“Falsification is the opposite of verification; you must try to show the theory is incorrect, and if you fail to do so, you actually strengthen it.”
- “Science requires testability.”
- “The idea here is that if you can’t prove something wrong, you can’t really prove it right either.”
- “A good theory must have an element of risk to it—namely, it has to risk being wrong. It must be able to be proven wrong under stated conditions.”
- “Applying the filter of falsifiability helps us sort through which theories are more robust. If they can’t ever be proven false because we have no way of testing them, then the best we can do is try to determine their probability of being true.”
Karl Popper says:
- “A theory is part of empirical science if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience.”
- “If observation shows that the predicted effect is definitely absent, then the theory is simply refuted.”
3. Necessity and Sufficiency:
“We often make the mistake of assuming that having some necessary conditions in place means that we have all of the sufficient conditions in place for our desired event or effect to occur.”
- “What’s not obvious is that the gap between what is necessary to succeed and what is sufficient is often luck, chance, or some other factor beyond your direct control.”
- “In mathematics they call these sets. The set of conditions necessary to become successful is a part of the set that is sufficient to become successful. But the sufficient set itself is far larger than the necessary set. Without that distinction, it’s too easy for us to be misled by the wrong stories.”
4. Causation vs Correlation:
“We notice two things happening at the same time (correlation) and mistakenly conclude that one causes the other (causation). We then often act upon that erroneous conclusion, making decisions that can have immense influence across our lives.”
- “Confusion between these two terms often leads to a lot of inaccurate assumptions about the way the world works.”
- “Whenever correlation is imperfect, extremes will soften over time. The best will always appear to get worse and the worst will appear to get better, regardless of any additional action. This is called regression to the mean, and it means we have to be extra careful when diagnosing causation.”
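Regression to the mean is easy to demonstrate with a small simulation. The skill-plus-luck model below is invented for the illustration:

```python
import random

random.seed(0)

# Hypothetical model: each observed score = stable "skill" + independent "luck".
population = [random.gauss(0, 1) for _ in range(10_000)]  # skills

def score(skill):
    return skill + random.gauss(0, 1)  # luck is re-drawn every round

round1 = [(skill, score(skill)) for skill in population]
top = sorted(round1, key=lambda pair: pair[1], reverse=True)[:100]  # round-1 extremes

mean_top_round1 = sum(s for _, s in top) / len(top)
mean_top_round2 = sum(score(skill) for skill, _ in top) / len(top)

# The same people score closer to the population mean the second time,
# with no change in skill at all -- regression to the mean.
print(mean_top_round1 > mean_top_round2 > 0)
```

The round-1 extremes were selected partly for their luck; when the luck is re-rolled, their average softens toward the mean even though nothing about them changed, which is why "the best will always appear to get worse" without any causal story.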
You May Also Enjoy:
- See all book summaries.
- Systems Thinking & Understanding how Everything Connects: “Thinking in Systems” by Donella Meadows (Book Summary)