Something that surprised me from the Dan Kammen interview was the idea that those who design predictive forecasts are, on average, twice as confident (or half as uncertain) as they should be. It makes sense to me that the people who design a forecast are the most invested in it, which might lead them to have more confidence in it than others would, but I'm left curious about how this overconfidence actually manifests empirically when uncertainty estimates are being made. Those making these forecasts are likely experts in their fields, so how do they end up miscalculating uncertainty so badly? Do they fail to account for all possible sources of uncertainty? Do they underestimate how much uncertainty a given factor may contribute? I would assume that the model maker's unconscious biases lead them to dismiss or underestimate uncertainty too easily, producing this phenomenon; still, I'm surprised that the effect is so significant.
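To get a feel for what "half as uncertain as they should be" would mean in practice, here is a minimal simulation sketch (my own illustration, not from the interview), assuming forecast errors are normally distributed: a forecaster who underestimates the true spread of errors by a factor of two reports a nominal 95% interval that in fact captures the outcome far less often.

```python
import random

random.seed(0)

TRUE_SIGMA = 2.0       # actual spread of forecast errors (assumed value)
REPORTED_SIGMA = 1.0   # the overconfident forecaster's spread: half the true value
Z_95 = 1.96            # normal quantile for a nominal 95% interval

N = 100_000
# Draw the errors the world actually produces
errors = [random.gauss(0, TRUE_SIGMA) for _ in range(N)]

# Fraction of outcomes landing inside the forecaster's nominal 95% interval
coverage = sum(abs(e) <= Z_95 * REPORTED_SIGMA for e in errors) / N
print(f"nominal coverage: 95%, actual coverage: {coverage:.1%}")
```

Under these assumptions the stated 95% interval covers the outcome only about two-thirds of the time, which is one concrete way the overconfidence could show up without any single uncertainty source being "forgotten" outright.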