I truly enjoyed this interview; Ben's insights were remarkable. The most prominent and recurring theme was language. As an economist deeply invested in understanding the future of innovation and its implications for job growth and distribution, I found this care toward language hugely significant for economists and politicians trying to communicate how their policies help workers, who are rightly insecure about the age of robots taking their jobs. Emphasizing that machines "are our partners," or saying that "the machine knows x or y" or that "it generates the ideas," only reinforces this anxiety. I thus think this focus on language is of high significance.

Additionally, I found it immensely interesting to understand his approach to the different levels of certainty required across applications of this particular 'tool.' He mentioned that for predictions that are novel (such as a promotion for a particular item, as in the 'Target knows you are pregnant' case, a kind of prediction that was not being made before), you probably need much higher certainty, perhaps even complete certainty. However, for tasks where humans are currently the agents of prediction, such as doctors, what matters is who takes responsibility for failure, even if on paper the computer performs better. (He was happy to accept the machine as a competing tool if the company that designed it would accept liability for the tool failing.)

Finally, I am immensely curious about the idea of white-boxing ML. My personal experience playing around with ML in Fall 2019, when I took a data science course, was one of marveling at the predictive capabilities but sheer shock at the black-box approach.