That is an interesting and valid comment. I was curious about this too, but I suppose I was confused by how one would measure the "right" amount of uncertainty about a particular model, especially ex ante. These assessments likely happen ex post, in a way that denigrates experts for being too in love with their models, and so they operate from a bias that puts downward pressure on the level of uncertainty.

To be honest, I would think you want experts to be more confident in their models, but ideally that confidence should come because, when they make a prediction, they also release a detailed report that transparently documents how all relevant sources of uncertainty were accounted for in the final level of uncertainty (variance) attached to that prediction.

For example, if someone says, "I believe the country's GDP next month will be 4% higher than in the same month last year," and states that with supreme confidence, as if it were fact, then we should probably treat that expert with more suspicion. But if an expert says the growth should fall between 3.8% and 4.2% with 95% confidence, with the variation driven by three particular factors, then we can be much more positive about that expert.
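To make that second kind of statement concrete, here is a minimal sketch of how three documented sources of uncertainty might be rolled up into a 95% interval around a point forecast. The factor names and standard deviations are made up for illustration, and the assumptions of independence and rough normality are mine, not anything implied by the original example.

```python
import math

# Hypothetical illustration: combining three documented sources of
# uncertainty into a 95% interval for a point forecast.
# Assumes the factors are independent and the combined error is roughly normal.

point_forecast = 4.0  # predicted year-over-year GDP growth, in percent

# Illustrative standard deviations contributed by three factors (made-up numbers)
factor_sd = {
    "factor_a": 0.06,
    "factor_b": 0.06,
    "factor_c": 0.06,
}

# For independent factors, variances add; total spread is the square root of the sum
total_sd = math.sqrt(sum(sd ** 2 for sd in factor_sd.values()))

# Roughly 95% of a normal distribution lies within 1.96 standard deviations
lower = point_forecast - 1.96 * total_sd
upper = point_forecast + 1.96 * total_sd

print(f"Point forecast: {point_forecast:.1f}%")
print(f"95% interval:   {lower:.1f}% to {upper:.1f}%")  # about 3.8% to 4.2%
```

The point is not the particular numbers but that the interval, and each factor's contribution to it, can be written down and audited, which is exactly the kind of transparency that should make us trust the expert more rather than less.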