D**N
Forecasting Is a Learnable Skill
Consider how Wall Street experts "correctly" forecast a market crash: if we predict that the stock market will decline by 20% every six months, we'll eventually be right, but we won't know exactly when.

The authors' research, funded by IARPA (the Intelligence Advanced Research Projects Activity), found that most experts are barely better than random guessing when it comes to predicting world events. Still, a small subset of people, known as superforecasters, consistently outperform both experts and chance in forecasting outcomes, even without access to classified information or elite resources.

Their findings show that superforecasting is not a talent but a skill that can be learned. Tetlock summarizes the composite portrait of a model superforecaster in chapter 8 of the book.

A superforecaster tends to be:
- CAUTIOUS: Nothing is certain
- HUMBLE: Reality is infinitely complex
- NONDETERMINISTIC: What happens is not meant to be and does not have to happen

In their abilities and thinking styles, they tend to be:
- ACTIVELY OPEN-MINDED: Beliefs are hypotheses to be tested, not treasures to be protected
- INTELLIGENT AND KNOWLEDGEABLE, WITH A "NEED FOR COGNITION": Intellectually curious; enjoy puzzles and mental challenges
- REFLECTIVE: Introspective and self-critical
- NUMERATE: Comfortable with numbers

In their methods of forecasting, they tend to be:
- PRAGMATIC: Not wedded to any idea or agenda
- ANALYTICAL: Capable of stepping back from the tip-of-your-nose perspective and considering other views
- DRAGONFLY-EYED: Value diverse views and synthesize them into their own
- PROBABILISTIC: Judge using many grades of maybe
- THOUGHTFUL UPDATERS: When facts change, they change their minds
- GOOD INTUITIVE PSYCHOLOGISTS: Aware of the value of checking for cognitive and emotional biases

In their work ethic, they tend to have:
- A GROWTH MINDSET: Believe it's possible to get better
- GRIT: Determined to keep at it, however long it takes

Not every superforecaster has every attribute, but the odds are better when superforecasters work together as a group and complement each other.

This is a great book for both strategic planning and making informed business decisions.
E**T
I'm a hedgehog.
I have been, and continue to be, a pundit. Upon reading this book I discovered that one of the reasons I'm somewhat good at punditry is that I am what Tetlock calls a "hedgehog." Hedgehogs tell tight, simple, clear stories that grab and hold audiences.

Hedgehogs are confident. We organise our thoughts around "Big Ideas" and then squeeze complex problems into our preferred cause-effect templates. Let's face it: hedgehogs make good pundits. The problem is that hedgehogs make terrible forecasters.

I've noticed this myself. When predicting how the Supreme Court will decide a case, what a jury will decide, or how the public may respond to something, my forecasts are notoriously inaccurate.

Reading Superforecasting may not change my tactics in giving interviews (why mess with success?), but I think it will affect how I write in the future. Concepts like Fermi estimates, outside vs. inside views, confirmation bias, and questioning basic, emotionally charged beliefs were not in my toolbox, but they will be now. If I am going to offer predictions, I owe it to my readers to sharpen these skills.

Easy reading? Not really. But worth the effort.
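The Fermi estimation mentioned above can be sketched in a few lines: break a seemingly unknowable quantity into factors you can roughly bound, then multiply. Every number below is an illustrative guess (the classic piano-tuners exercise), not a figure from the book.

```python
# Fermi estimate: how many piano tuners might a large city support?
# Decompose into rough, defensible factors and multiply them out.

households = 3_000_000          # guess: households in a large metro area
piano_share = 1 / 20            # guess: ~1 in 20 households owns a piano
tunings_per_year = 1            # guess: a piano is tuned about once a year
tunings_per_tuner = 4 * 250     # guess: ~4 tunings/day, ~250 working days

# Total demand divided by one tuner's annual capacity:
tuners = households * piano_share * tunings_per_year / tunings_per_tuner
print(round(tuners))  # 150
```

The point is not the answer but the decomposition: each factor can be challenged and revised independently, which is what makes the estimate improvable.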
J**R
Scientific approach to prediction
I really enjoyed this book a few years ago, and I have come back to offer a review based on my notes at the time and on how the insights have settled for me since. I took away many key concepts for successfully forecasting uncertain events, and also noted some areas for further exploration. Many of the following notes are structured around the authors' insight into the demonstrated practices of repeatedly successful forecasters.

The book repeatedly stresses the importance of measurement for assessing and revising forecasts and programs. Many people simply don't create metrics of any kind, instead making unverifiable, chronologically ambiguous declarations. The book emphasizes the feedback on predictions that measurement allows, as there is a well-studied gap between confidence and skill in judgment. We have a tendency to be uninterested in accumulating counterfactuals, but we must know when we fail in order to learn from it. If forecasts are either not made, or are unquantified and ambiguous, we can't receive clear feedback, so the thought process that led to them can't be improved upon. Even feedback is subject to the psychological trap of hindsight bias: once we know the outcome, that knowledge skews our perception of what we actually thought at the time of the prediction, before we knew the outcome.

The main qualities for successful forecasting are being open-minded, careful, and given to focused, self-critical thinking, which is not effortless. Commitment to self-improvement is the strongest predictor of long-term performance in measured forecasting; this can be considered roughly equivalent to the popular concept of grit. Studies show that individuals with fixed mindsets do not pay attention to new information that could improve their future predictions.
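As a concrete illustration of the measurement discussed above, the forecasting tournaments behind the book scored predictions with the Brier score: the mean squared difference between stated probabilities and what actually happened. A minimal sketch, with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities (0..1)
    and binary outcomes (0 or 1). Lower is better: 0 is perfect,
    and always saying 50% earns 0.25 on binary questions."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 80%, 60%, and 90% on three events,
# of which the first and third occurred:
score = brier_score([0.8, 0.6, 0.9], [1, 0, 1])
print(round(score, 4))  # 0.1367
```

Scoring like this is exactly the kind of unambiguous feedback the reviewer describes: a vague "it might happen" can never be graded, but a dated, numeric forecast can.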
Similarly, forecasts tend to improve when probabilistic thinking is embraced over fatalistic thinking, that is, over the perspective that certain events are inevitable. A few interesting findings that the authors expand upon in more detail: experience matters, since it supplies the tacit knowledge essential to the practice of forecasting; and grit, or perseverance towards making great forecasts, is roughly three times as important as intelligence.

Practices to undertake when forecasting:
- Break the question down into components so you can distinguish and scrutinize your assumptions.
- Work backwards: ask what you would need to know to answer the question, then make appropriate numerical estimates for those sub-questions.
- Develop an outside view: start from an anchor drawn from the past experience of others, at first downplaying the problem's uniqueness.
- Explore other potential views of the question.
- Express all aspects and perspectives as a single number that can be manipulated and updated.

Psychological traps discussed in the book:
- Confirmation bias: a willingness to seek out information that confirms your hypothesis while not seeking information that may contradict it, the opposite of hunting for counterfactuals.
- Belief perseverance (related to cognitive dissonance): being unable to update a belief in the face of new evidence, rationalizing instead so that the belief is not upset.
- Scope insensitivity: failing to factor an important aspect of scope, such as the timeframe, properly into the forecast.
- Attribute substitution: replacing a hard question with a similar question that is not equivalent but is much easier to answer.

Researched qualities to strive for as a forecaster: cautious, humble, nondeterministic, actively open-minded, reflective, numerate, pragmatic, analytical, probabilistic, a belief updater, an intuitive psychologist, and possessed of a growth mindset.

The authors then take a more practical perspective on forecasting in teams. Psychological traps for teams include the well-known phenomenon of groupthink: small, cohesive groups tend to unconsciously develop shared illusions and norms, often biased in favor of the group, which interfere with critical thinking about objective reality. There is also a tendency for members to leave the hard work of critical thinking to others on the team rather than sharing it optimally, which, combined with groupthink, leads the group to feel a sense of completion upon reaching mere agreement. One idea to keep in mind when managing a group is that its collective thinking is a product of the communication within the group itself, not the sum of the thinking of its individual members.

Some common perceived problems with forecasting also receive attention in the book: the wrong-side-of-maybe fallacy, the thinking that a forecast was bad because it was above 50% but the event didn't occur, which can leave forecasters unwilling to be vulnerable with their forecasts; publishing forecasts for all to see, where research shows that public posting of forecasts, with one's name attached, actually creates more open-mindedness and better performance; and the fallacy that because many factors are unquantifiable due to their real complexity, the use of numbers in forecasting is therefore not useful.

Some concepts I noted for further research: Bayesian belief updating, which is basically a mathematical way of weighing how strong your prior belief was relative to some specific new information; chaos theory; game theory; Monte Carlo methods; and the systematic intake of news media. These are concepts from the book that matched my own interests and that I have continued to explore. The book was very valuable for cohesively bringing the above concepts together in the context of a compelling story: the IARPA forecasting tournament, which the author's team won as a product of the research that led to this groundbreaking book.
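The Bayesian belief updating flagged above for further research fits in a few lines. A minimal sketch with made-up numbers: the posterior weighs the prior by how likely the new evidence would be if the hypothesis were true versus false.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing evidence,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start from a 30% base rate (the outside view); the new information
# is twice as likely if the hypothesis is true (0.8) as if false (0.4):
posterior = bayes_update(0.3, 0.8, 0.4)
print(round(posterior, 3))  # 0.462
```

This mirrors the practice described earlier of expressing a forecast as a single number: each piece of news nudges the number rather than flipping a belief wholesale.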