Forum
Welcome! Have a look around and join the discussions.
A place to talk about Economic Modeling, Behavioral Economics, Corporations & how these affect Wealth.
Whether you're studying at Harvard or online, please feel free to add posts that don't fit in other categories here!
A place to talk about the Future of the Future pathway, especially about AI and the evolution of modern predictive systems.
Here's a spot where you can add thoughts about what you'd like added to the Prediction Project in the future!
New Posts
- Thoughts from Learners
I was interested in Prof. Gilbert’s discussion of how people internalize uncertainty and accuracy estimations. In his interview, he notes that individuals are typically unmotivated by statistics such as “a certain accuracy estimation improved by x%,” and are instead more affected by behavior changes in others. He uses the example of recycling to argue that even people who were viscerally opposed to the notion of climate change now find themselves using blue bins simply because they observe others doing the same.

The discussion reminded me of a common problem in modern natural language processing systems, where researchers still struggle to convey uncertainty estimations to the users of their programs. For example, if a chatbot on an online shopping platform could distinguish between requests to return and to exchange a product with 70% certainty, how would a company decide whether it is worth integrating into its site? Though the risk is not high, there is still a chance the bot could misread the request and end up with angry customers who expected a refund but got an exchange instead.

I wonder if there might be a better way to convey uncertainty by making use of the “herd behavior” mentality that Gilbert discusses. What if, say, there were a score that conveyed how many companies actually trust this chatbot (to use the same example) and have had a good experience with it? If the chatbot company were transparent about other users’ interactions with its platform, would new users have an easier time understanding the uncertainty involved?
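To make the trade-off concrete, here is a rough sketch in Python. Everything in it is my own illustration: the threshold, the names, and the "peer trust" score are made-up placeholders, not any real platform's API.

```python
from dataclasses import dataclass


@dataclass
class IntentPrediction:
    intent: str        # e.g. "return" or "exchange"
    confidence: float  # the model's own probability estimate, 0.0 to 1.0


def route_request(prediction: IntentPrediction, threshold: float = 0.85) -> str:
    """Act on the classifier only when it is confident; otherwise escalate.

    A 70%-confident guess between "return" and "exchange" falls below the
    threshold, so it goes to a human instead of risking the wrong action.
    """
    if prediction.confidence >= threshold:
        return f"auto-handle as {prediction.intent}"
    return "escalate to a human agent"


def peer_trust_score(peer_reports: list[bool]) -> float:
    """The 'herd behavior' idea: report how many other companies deployed the
    bot and had a good experience, instead of quoting raw accuracy numbers."""
    if not peer_reports:
        return 0.0
    return sum(peer_reports) / len(peer_reports)


if __name__ == "__main__":
    guess = IntentPrediction(intent="exchange", confidence=0.70)
    print(route_request(guess))                         # escalate to a human agent
    print(peer_trust_score([True, True, False, True]))  # 0.75
```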
- The Future of the Future
This week I watched Professor Goodman's interview of Ned Hall. What I found most interesting about the talk was Professor Goodman and Ned's conversation about rules and theories. From what I gleaned, rules are more tailored to specific situations and do not involve concepts that can't be directly observed or that need separate explanation. Theories, on the other hand, can often have exceptions since they are more general. In addition, Ned and Professor Goodman highlighted that theories can invoke concepts that can't be directly observed or explained in a closed manner.

I had never put much thought into the distinction between the two concepts, and it was interesting to see Alyssa and Ned demonstrate the difficulty of defining rules, as they couldn't even provide a succinct rule that could be used to delineate between rules and theories. One question I wish Alyssa had asked Ned is whether he knows of any situations where making the distinction between rule and theory is especially important. I somewhat get the distinction at a high level, but I am unclear about its utility, so I'd love to hear more examples of how the distinction provided greater clarity or insight. In addition, I would love to know Ned's thoughts on AI potentially becoming more accurate without necessarily becoming any more transparent.

https://www.labxchange.org/library/pathway/lx-pathway:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954/items/lx-pb:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954:lx_simulation:8bf7271d?source=%2Flibrary%2Fclusters%2Flx-cluster%3AModernPrediction
- Space
As someone who didn't know a lot about space, I thought it was really interesting that we search across such a massive frequency range when looking for extraterrestrial life. Of course, this introduces a great deal of uncertainty, which was also discussed. I wonder how one can quantify that uncertainty: we have talked about the importance of thinking carefully about uncertainty when making predictions, but I think it is also important to be able to quantify it, so I would ask Jill Tarter how to quantify the uncertainty involved in searching for extraterrestrial life.

I also found her opinion that we will never be completely reliant on robot technology to explore space and search for extraterrestrial life really interesting, because one would expect that as technology evolves, the amount of exploring we do ourselves would decrease. I would ask Jill Tarter how she thinks the evolution of AI will change this. It also raises interesting questions, in my opinion, about how computation/AI/technology can increase uncertainty when exploring!
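One way I could imagine starting to answer my own question, sketched very roughly below: put wide ranges on each unknown factor (in the spirit of Drake-equation-style estimates) and propagate them with Monte Carlo sampling, reporting an interval rather than a single number. The factors and ranges here are placeholders I made up, not anything from the interview.

```python
import math
import random

# (low, high) bounds for each uncertain factor, sampled log-uniformly.
# These ranges are illustrative placeholders only.
FACTOR_RANGES = {
    "star_formation_rate": (1.0, 100.0),
    "fraction_with_planets": (0.1, 1.0),
    "habitable_planets_per_system": (0.01, 1.0),
    "fraction_developing_life": (1e-3, 1.0),
    "fraction_becoming_detectable": (1e-3, 1.0),
}


def sample_log_uniform(low: float, high: float) -> float:
    """Sample a value whose order of magnitude is uniform between the bounds."""
    return 10 ** random.uniform(math.log10(low), math.log10(high))


def simulate(n_samples: int = 100_000) -> list[float]:
    """Draw many possible worlds and multiply the factors in each one."""
    draws = []
    for _ in range(n_samples):
        value = 1.0
        for low, high in FACTOR_RANGES.values():
            value *= sample_log_uniform(low, high)
        draws.append(value)
    return draws


if __name__ == "__main__":
    draws = sorted(simulate())
    n = len(draws)
    # Report a central estimate and a wide interval instead of a single number.
    print("median:", draws[n // 2])
    print("5th to 95th percentile:", draws[int(0.05 * n)], "to", draws[int(0.95 * n)])
```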