The most memorable piece of information I learned from Dr. Goodman’s interview with Avi Loeb was his conclusion that the mere collection of data is not sufficient to obtain a full understanding of the nature of the world. I was really intrigued by the anecdote he shared about his visit to Chichén Itzá — and the idea that while Mayan astronomers may have been prolific observers of the natural world, this did not make them scientists as we would understand the term today, because they used this data only to practice astrology rather than to derive astronomical and physical principles and make predictions, as their successors in the field would go on to do. This challenges a claim I’ve often heard about large language models — that by creating neural networks that imitate the human brain, we can in effect understand the human mind. But are observations of human-like mental models sufficient to truly understand the nature of the brain?
If I were to pose an additional question, I would ask on what basis he derived his prediction that humans (and other biological creatures) will replace their physical bodies with “things that are much more durable,” such as robots. I don’t necessarily consider this prediction outside the realm of possibility, but I believe understanding how Dr. Loeb arrived at it would further our understanding of the nature of biological predictions, which, as has been noted throughout the class, follow an altogether different course from predictions in physics. One could never “test” the theory that biological creatures tend to replace their physical forms with robotic ones the way one can test a law of physics, but still, I would be curious to learn more about Dr. Loeb’s theorizing process as it pertains to this specific assertion.
Hi,
I find the connection that you made to LLMs really interesting. Yet so much of AI still operates as a black box, despite continuous efforts to make it more transparent (e.g., explainable AI, or XAI). I can't help but wonder: even if we had a neural network that was a perfect imitation of the human brain, would we even be able to understand it? Or would we simply wind up with two black boxes instead of clarity?
Anyway, great job!