I was thinking of some of the not-so-mathematical lessons we learn from Bayes' Rule. I'd like to include actual examples for each, and show how each maps onto the various pieces of Bayes' Rule, but I figured I'd put up my list here and add to it as I think of more.
- Confidence in a claim should scale with the evidence for that claim
- Ockham's razor - simpler theories are preferred (i.e. you pay a marginalization penalty for each adjustable parameter, because the likelihood must be averaged across that parameter's prior)
- Simpler means fewer adjustable parameters
- Simpler means that the predictions are both specific and not overly plastic. For example, a hypothesis that is consistent with the observed data, and would remain consistent if the data were the opposite, is overly plastic. The fine-tuning argument for the God hypothesis has this character: a universe fine-tuned for life is taken as evidence for design, but if our universe were not fine-tuned for life, and life were exceptional, that too would be taken as evidence for design - thus the data, and its opposite, are both covered by the hypothesis.
- Your inference is only as good as the hypotheses that you consider. If you consider only "random guessing" and "psychic", then nearly every successful octopus will look psychic.
- Extraordinary claims require extraordinary evidence (a tiny prior needs a correspondingly large likelihood ratio to overcome it).
- It is better to state your assumptions explicitly than to hold them implicitly.
- It is a good thing to update your beliefs when you receive new information, and not a sign of waffling.
- Not all uncertainties are the same - a 50% that comes from ignorance across many live hypotheses is not the same as a 50% that comes from a well-characterized coin.
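The marginalization penalty behind Ockham's razor can be made concrete with a toy example (the data here - 6 heads in 10 flips - are made up for illustration): a zero-parameter "fair coin" model is compared against a one-parameter "unknown bias" model whose likelihood must be averaged over the prior on that extra parameter.

```python
from math import comb

# Made-up data: 6 heads in 10 coin flips.
n, k = 10, 6

# Simple model H0 (no adjustable parameters): the coin is fair, p = 0.5.
# Its evidence is just the likelihood at that single point.
p_d_h0 = comb(n, k) * 0.5 ** n

# Complex model H1 (one adjustable parameter): bias p unknown, uniform
# prior on [0, 1]. Its evidence marginalizes the likelihood over the
# prior -- approximated by a grid average here -- which spreads
# probability thinly across all the data sets the flexible model
# *could* have explained.
grid = [i / 10000 for i in range(10001)]
p_d_h1 = sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for p in grid) / len(grid)

print(p_d_h0)  # ~0.205
print(p_d_h1)  # ~0.091 (analytically 1/(n+1) = 1/11 for a uniform prior)
```

The Bayes factor P(D|H0)/P(D|H1) comes out around 2.3: with equal priors the simpler model wins, because 6/10 heads is unremarkable for a fair coin, and H1 paid a penalty for flexibility it never needed.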
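The psychic-octopus point can also be sketched numerically. The numbers below are invented: an octopus picks the winner of 8 matches correctly, random guessing succeeds with probability 0.5 per match, and the "psychic" hypothesis is (generously) assumed to succeed with probability 0.9.

```python
def posterior(hypotheses, priors, likelihoods):
    """Normalize prior * likelihood over the hypotheses we chose to consider."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return {h: u / total for h, u in zip(hypotheses, unnorm)}

random_lik = 0.5 ** 8    # P(8/8 correct | random guessing)
psychic_lik = 0.9 ** 8   # P(8/8 correct | psychic), an assumed rate

# With only these two hypotheses on the table (equal priors), "psychic"
# soaks up nearly all of the posterior -- not because the evidence for
# telepathy is strong, but because every rival explanation (a biased
# choice procedure, selective reporting across many octopuses) was
# excluded from the hypothesis space before the calculation started.
post = posterior(["random", "psychic"], [0.5, 0.5], [random_lik, psychic_lik])
print(post)  # psychic gets ~0.99
```

The arithmetic is impeccable; the absurd conclusion comes entirely from the menu of hypotheses, which is exactly the lesson.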
Any other lessons we learn?