I did have several questions for Ben. I understand that he felt quite confident in his 100-year prediction: that we will look back at the silly humans of the early 21st century who thought AI would become smarter than us and better than us in every way, because by then we will truly understand that AI is just a tool and will come to celebrate and appreciate the many differences between the two kinds of agents. But on what grounds is this prediction made? Has he considered the possibility that a single subfield of AI could be met with an exogenous shock that transforms the industry and solves the critical problem of making ML tools more human-like? What if new forms of legal innovation (much as the patent system once encouraged invention) fuel a new industry of novel AI applications by enabling new forms of responsibility-sharing, even when the company that initially designed a system is unwilling to shoulder the entire risk? One must also consider how, over time, the pilot's job has become less necessary to the functioning of the plane, even given the enormous complexity, frequency, and stakes of flight. Will many more jobs emerge in which the tool does most of the work and the human is there merely to watch for warning signs?