I was interested in Prof. Gilbert’s discussion of how people internalize uncertainty and accuracy estimates. In his interview, he notes that individuals are typically unmotivated by statistics such as “a certain accuracy estimate improved by x%,” and are instead more affected by changes in other people’s behavior. He uses the example of recycling to argue that even people who were viscerally opposed to the notion of climate change now find themselves using blue bins simply because they observe others doing the same.

The discussion reminded me of a common problem in modern natural language processing systems, where researchers still struggle to find ways to convey uncertainty estimates to the users of their systems. For example, if a chatbot on an online shopping platform could distinguish between requests to return a product and requests to exchange it with 70% certainty, how would the company decide whether it is worth integrating into their site? Even if that failure rate seems tolerable, there is still a chance the bot misreads the request and produces angry customers who expected a refund but got an exchange instead.

I wonder if there might be a better way to convey uncertainty by making use of the “herd behavior” mentality that Gilbert discusses. What if, say, there were a score conveying how many companies actually trust this chatbot (to use the same example) and have had good experiences with it? If the chatbot company were transparent about other users’ interactions with its platform, new users might have an easier time understanding the uncertainty involved.
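To make the idea concrete, here is a minimal sketch of what combining the two signals might look like: a confidence threshold on the intent classifier plus a “social proof” score summarizing other adopters’ experiences. Everything here is hypothetical (the `Deployment` record, the hard-coded 70% confidence, the 0.80 floor); it is meant only to illustrate the shape of the mechanism, not any real chatbot API.

```python
from dataclasses import dataclass

# Hypothetical record of how another company fared with the chatbot.
@dataclass
class Deployment:
    company: str
    satisfied: bool  # did this company report a good experience?

def classify_intent(message: str) -> tuple[str, float]:
    """Stand-in for a real NLU model: returns (intent, confidence).
    The 70% figure from the example above is hard-coded for illustration."""
    return ("return", 0.70)

def social_proof_score(deployments: list[Deployment]) -> float:
    """Fraction of adopting companies that reported a good experience."""
    if not deployments:
        return 0.0
    return sum(d.satisfied for d in deployments) / len(deployments)

def handle_request(message: str, deployments: list[Deployment],
                   confidence_floor: float = 0.80) -> str:
    intent, confidence = classify_intent(message)
    if confidence < confidence_floor:
        # Below the floor, defer to a human rather than risk an angry customer.
        return ("Routing to a human agent to confirm whether you want "
                "a return or an exchange.")
    score = social_proof_score(deployments)
    return (f"Handling this as a {intent} request "
            f"({score:.0%} of {len(deployments)} companies using this bot "
            f"report good experiences).")

if __name__ == "__main__":
    adopters = [Deployment("ShopA", True),
                Deployment("ShopB", True),
                Deployment("ShopC", False)]
    print(handle_request("I'd like to send this back", adopters))
```

The point of the sketch is that the number shown to end users is the adoption/experience score rather than the raw model confidence, which is the kind of socially grounded signal Gilbert suggests people actually respond to.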