Link Search Menu Expand Document

Lecture 17: Statistical Street Fighting - Tricks that Work

Yan Zhang

Another type of “street fighting” tool that (as far as I know) is not well studied in the academic literature is the set of specific questions you can ask yourself to make progress on your forecasts. Often, as practitioners, we can ask ourselves questions that help us check blind spots or put ourselves in the state of mind to make better predictions.

From my own experience, and after conversations with more experienced forecasters, I have selected some questions that seem particularly valuable for making predictions:

  1. Am I believing this because it is what I want to believe, or is it what I think is true?
  2. Did I read the question / resolution criterion carefully?
  3. Would I bet [a nontrivial amount of money] on this?
  4. What would [person X] do?
  5. Do I have a good reason to believe that I am more accurate than the crowd?
  6. What would an expert analyst [or other authority figure] think about my forecast?

In the last lecture, I already went into the first question, which strikes at the heart of bad habits that keep us from taking in new information that challenges us. In this lecture, I will go into the remaining questions in a bit more detail.

“Did I read the question / resolution criterion carefully?”

This innocent-sounding question is very important for performing well on actual forecasts. For example, it is very easy to see a question like “Will [team A] beat [team B] in [sport event X]?” and make the following errors:

  • You compute the probability that team B beats team A and subtract it from 100%, failing to account for the possibility of a tie, if one is possible under the rules.
  • You compute the probability that team A beats team B in a contest, but fail to account for the probability that team B is disqualified for some reason; since the resolution criterion says disqualification counts as A beating B, you end up underestimating.
  • You fail to account for the probability that the event is canceled due to some unforeseen reason (such as a pandemic), so you end up overestimating.
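These errors are easy to see once the outcomes are enumerated explicitly. Here is a minimal sketch with made-up probabilities (both the numbers and the resolution rule are assumptions for illustration, not from any real question):

```python
# Hypothetical outcome probabilities for "[team A] beats [team B]".
# All numbers are invented for illustration; they sum to 1.
p_a_wins_outright = 0.50   # A wins the contest on the field
p_b_wins_outright = 0.32   # B wins the contest on the field
p_tie = 0.10               # a tie is possible under the rules
p_b_disqualified = 0.03    # B disqualified; criterion counts this as A beating B
p_canceled = 0.05          # event canceled entirely; question resolves "no"

# Naive estimate: just invert B's chance of winning. This silently
# counts ties and cancellations as wins for A.
naive = 1 - p_b_wins_outright                   # 0.68

# Careful estimate: sum only the outcomes the resolution criterion
# actually counts as "A beats B".
careful = p_a_wins_outright + p_b_disqualified  # 0.53

print(naive, careful)
```

The point is not the arithmetic but the habit: enumerate the outcomes the criterion actually covers before inverting anything.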

Learning to do this with every prediction is a good way to improve your performance in artificial contests such as our forecasting contests and homework, simply because you will make fewer mistakes. More importantly, however, it is a way of getting into an inquisitive and skeptical frame of mind. This means you are more likely to see probability events other people may not, and to slice the problem into parts better. (In fact, I originally underestimated how important this would be and shelved it as a “good for contests only” thing, but Alex Lawson has successfully changed my mind!)

This skill is quite easy to practice; you can, for example, just make sure you ask yourself this question for every predictive question we have! (Judging from the discussions on Metaculus, it seems skilled forecasters frequently do this.) For a slightly different flavor of practice, you can try to think about how you would write a question for a prediction contest in a way that covers all the bases. Let’s try it.

Q: Imagine you want to figure out something like “will the situation of COVID-19 be ‘bad’ by summer of 2022?” How would you phrase this question on a contest? (Spend 5 minutes brainstorming.)

  1. For starters, you need to define “situation” - would it be in terms of deaths? Mandates? Public statements? (For practice, if you have recently done a lot of COVID forecasting about, say, deaths, you may want to try to define something in terms of public statements instead.)
  2. Say you picked mandates. Would you define it in terms of mask mandates? Vaccine mandates? “Mask mandates and above,” where “above” is well defined?

If you put 5 honest minutes into this, you should notice that you got, almost “for free,” a clearer description of the situation than what you started with. What else did you discover from doing this activity?

“Would I bet [a nontrivial amount of money] on this?”

This is something that seasoned forecasters do so automatically that they might not even think of it as a trick, but it is very unintuitive for most people who do not have a background in games of probability where real stakes can be lost. I think it is an effective question to ask yourself during predictions for two reasons:

  1. To be in a more focused state of mind. One of my favorite quotes is “A bet is a tax on bullshit” from Alex Tabarrok (see https://biatob.com/welcome for a relevant link; h/t Jean). If you at least pretended you were betting money, you would be more careful about your thinking processes. In mid-February, it would have been easy to say things like “there’s definitely no way Putin will invade Ukraine” because you heard some journalist say “there is definitely no way Putin will invade Ukraine.” However, if you actually meant that, and Putin actually invaded Ukraine in the next week, would you have been willing to lose $5? $50? $500?
  2. To tap into your intuitions better. As touched on in “The Inner Game of Forecasting,” much of the feeling we have on subjects taps into inner intuitions and data in a way that’s not easily accessible by rational, “slow” thinking. Tabarrok gives a very good example (see 2:45 or so): there are frequently 50-50 situations where it might be hard to figure out which side is better, but asking yourself “if I wanted to bet money, which side would I bet on?” seems to give a nontrivial amount of information. Especially for people who are not used to or practiced in reasoning about probability, this might be a way of extracting experience and models from your brain that you didn’t know you had.

As practice, try to come up with a question that you feel 50-50 on. Don’t think too hard about it.

Okay. Now, imagine that you are betting money (say $100, but if you are a high roller, maybe go to $10,000). Which side would you take? (Just imagine yourself taking both sides, and pick the one that feels better for your money.)
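If it helps to make the bet concrete, you can compute what each side of the wager is worth at your stated probability. A small sketch, where the stake and the probability are arbitrary assumptions:

```python
def expected_value(p_yes, stake=100.0):
    """Expected value of betting `stake` dollars on "yes" at even money:
    win `stake` with probability p_yes, lose `stake` otherwise."""
    return p_yes * stake - (1 - p_yes) * stake

p = 0.55  # the probability you claim to believe
print(expected_value(p))        # EV of betting "yes" at p = 0.55: about +$10
print(expected_value(1 - p))    # EV of betting "no": about -$10
```

If you claim p = 0.55 but flinch at taking the “yes” side for $100, the flinch is information: your real probability is likely closer to 50% than your stated number.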

The interesting part here is to figure out what your brain was trying to tell you, so let’s do the third part of the exercise: ask yourself what your brain was actually saying with those instincts. Do those instincts make sense?

As for why this can work (just for transparency: for me, it doesn’t always work, but maybe 20% of the time it gives me something interesting), I think money taps into our emotions and intuition in a way that abstract thinking about “right” or “wrong” does not. So, in line with the Inner Game of Forecasting, thinking about money is a way for some of us to listen to our intuitions more seriously.

While it is certainly not my intention to publicly tell you to gamble, it would be intellectually dishonest for me to leave out that if you actually gambled on something, you would probably improve at forecasting even faster. As anyone who has played real-money versus play-money poker can tell you, the improvement from playing real-money poker far exceeds what you get from play-money poker, and it is very hard to jump to high-stakes gambling without low-stakes gambling. We cannot and will not go this far in a classroom structure; instead, we do the best we can to give you some Skin in the Game by “betting” some of your grade on predictive questions – it is the safest way we see for you to realistically practice this skill.

“What would [person X] do?”

This trick is almost a cliché, but it seems to be a very useful way of getting us to take the perspective of another person with fairly high accuracy. This works best if the other person is a skilled forecaster, but as we have repeatedly said in this course, for a skill such as forecasting, introducing diverse perspectives is by itself likely to improve your performance.

I will give a raw “real-time” example of me making a prediction:

  1. Say I am trying to predict “will Russia invade Ukraine during February 2022” defined as 2 major U.S. news outlets publicly stating that Russia has “invaded” Ukraine before February 28th, 2022, 11:59PM PT. (This hadn’t resolved yet when I initially wrote these notes on February 14th.)
  2. I have not been reading the news much recently, so I look around a bit with Google, Twitter, etc. I see a full range of estimates, skewed toward the high end due to people recently posting videos on Twitter. I find reports of Russia building a bridge that wasn’t there previously, which seems like a costly signal. After about 5 minutes of this, I conclude that there’s probably about a 65% chance that Russia will invade.
  3. One of my favorite friends who is into forecasting and truth-seeking (let’s call him “B” for short) is pretty good at forecasting, at least relative to me (we have made many bets against each other in the past; I think he does slightly better). I am thinking “what would B do” and I instantly see a direction I didn’t consider: B likes to ask “what is the person gaining by doing the thing?” This reminds me of my own knowledge about North Korea’s cycle of escalating -> negotiations -> de-escalation as a way of obtaining resources, where nothing bad ever actually happens. I think this calls for me to decrease my estimate. I don’t see a principled way of decreasing, so I go to something like 45%, since this difference felt quite big.
  4. I have another friend, J, who is very intelligent and frequently disagrees with me since he has a whole different set of models, so I really like thinking about things from his perspective. Asking “what would J do?” gives “how costly is the action being taken?” While I have already thought about cost (see the bridge example above), asking this question really shifts the focus onto cost, and a quick Google search shows that Russia has moved a lot of troops (150,000) around Ukraine. I now move up a bit, to something like 55%.

Overall, I still think my forecasting so far is fairly naive, especially on such an important subject where I lack domain knowledge. However, relative to where I started before this exercise, I did polish my model a lot by using this trick, and I am much more confident that I am starting to understand the situation a bit better. (To keep it simple, I do not include my updates after 2/14/2022, when I first wrote this draft, even though we have all gotten more information since then.)

“Do I have a good reason to believe that I am more accurate than the crowd?”

Some people (myself included at times) tend to be contrarian in situations where we think other people are wrong, probably at least partially because we like being “different but right.” However, this means we are biased away from the truth in the contrarian direction, and this question is useful as a humbling anti-bias tool. Basically, it says that if the crowd differs from us, that is some Bayesian evidence that our priors are more likely to be wrong, so we probably want at least one very good reason (maybe a primary source that seems under-analyzed, or a perspective from a non-public expert) to conclude something drastically different.
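One simple way (by no means the only or canonical one) to formalize “the crowd is evidence” is to pool your probability with the crowd’s in log-odds space, where the weight you give yourself encodes how good your very good reason is. A sketch with made-up numbers and an assumed weight:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool(p_mine, p_crowd, w_mine=0.3):
    """Weighted average of two forecasts in log-odds space.
    w_mine is how much weight you give your own view relative to the
    crowd's -- a subjective assumption, not a derived quantity."""
    return sigmoid(w_mine * logit(p_mine) + (1 - w_mine) * logit(p_crowd))

# I say 80%, the crowd says 40%, and I have no special reason to trust
# myself more than usual: the pooled estimate lands near the crowd.
print(round(pool(0.80, 0.40), 3))  # roughly 0.53
```

Raising `w_mine` is only justified when you actually have that under-analyzed primary source or non-public expert; “I am smarter than the room” should not move it much.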

For the sake of time we will not practice this, but next time you make a bet that seems “out there” while everyone else seems to be thinking the same way, try to make sure you have at least one good explanation of why you might be more right than everyone else! (“I am smarter than everyone else in the room” is probably a terrible reason.)

“What would an ‘expert analyst’ think about my forecast?”

This is similar to “what would X do?” and serves similar purposes, so I won’t go into too much detail. I think the main thing this one does is put you in the “outside view,” which de-biases our own egos and helps find mistakes in our thinking, whereas “what would X do?” seems better at finding new ideas than at finding mistakes in our own ideas.

Consolidated Practice

As practice, let’s just do a single prediction, but try to use these questions. We will use a Metaculus question as a guide.

  1. The question to be predicted is “What is the probability that airplane travelers are free from mask mandates in the USA by 9/2022?”
  2. Before starting the prediction, let’s practice understanding the question / resolution criterion carefully. Can you list things you would want to look up / ask the giver of the question to clarify? Take a few minutes to think about things to clarify. Think hard about things that might be ambiguous, and try to guess what interpretations you might have for them.
  3. (See below for the clarifications given by Metaculus. Make other clarifications of your choice if needed.)
  4. Now, before doing more thinking, if you were to bet money on this, what direction would you take? What odds seem fair? (start by taking 50:50 odds. If it feels obvious you want to go one way, change the odds until the game seems even)
  5. Now, do your thinking, as always (say 5 minutes).
  6. Now, specifically ask yourself (if you have done so already, you can skip this step) “what would person X do?” to see if it generates more ideas. For example, you could ask yourself: “how would Jacob approach this question?”. And think for 5 more minutes.
  7. Finally, pretend you are giving your forecast to an expert mask mandate analyst (whatever that means). What do you think she’ll think about your forecast? Feel free to change it.
  8. Discuss with classmates (small groups or as a class): what is your final forecast? Did any of these questions generate new ideas? (in particular, what do you think backed up the intuitions you had in #4?) What were those ideas?
  9. If applicable, as a parting thought: if it seems your odds are very different from other people’s, do you think there was an “unfair” reason that justifies you being so different?

As a reference for step 2, the things clarified by the Metaculus question are:

  1. Resolves positively when government mask mandates are removed for ordinary airplane travelers (not, e.g., pilots), AND at least 3 large carriers remove their mandates as well. Large carriers are those in Group III, of which there are currently 18.
  2. Mandates that are removed only for vaccinated people DO NOT count for the purpose of resolution. They must be removed for any normal traveler regardless of prior disease history and vaccination status.
  3. Mandates DO NOT have to be justified by COVID-19; they may be for any reason.

Conclusion

I’ve tried to put an overarching theme around these questions, but everything I came up with seemed fairly forced. The best I could come up with is “they are all related to cognitive biases,” so I hope that will serve as an advertisement for when we talk more about them.

(I thank Collin Burns for extensive help with this post. I thank Alex Lawson, Misha Yagudin and Linch Zhang for valuable conversations.)