I really enjoyed this week's video about AI safety! I found it interesting that Ben made the distinction between AI being framed as a partner versus as a tool. I think it is super important that we treat AI as a tool rather than a partner, because using AI as a partner may lead to unpredictable results: machine learning systems often produce outcomes that can't be succinctly explained. If we are going to cede control to something the way we would to a partner, it is imperative that we understand how that thing makes its decisions, and that isn't the case with AI today. A student in the crowd raised this concern as a reason why AI may be hard to regulate, and I agree. I believe we need to create AI systems that are more transparent about how they arrive at their outputs; until then, regulation seems impossible to me.
One question I'd like answered is: what has been done to make AI more transparent and traceable? The issue of AI's black-box nature seems to be well understood, but I'm curious whether the problem can actually be solved, and if so, how. Maybe seeing what data is being used to generate each part of an AI's response would better inform regulators about how a particular AI is functioning.
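To make that last idea a bit more concrete, here is a minimal toy sketch (entirely my own hypothetical example, with made-up word weights, not how any real LLM or regulatory tool works) of the kind of per-input attribution I have in mind. With a model this simple, every part of the output can be traced back to specific inputs, which is exactly the property that today's large neural models lack:

```python
# Toy illustration of input attribution: a hypothetical linear "review scorer"
# where every word has a weight, so each part of the output can be traced back
# to the inputs that produced it. The weights below are hand-picked for the
# example, not learned from real data.
WORD_WEIGHTS = {
    "great": 2.0,
    "helpful": 1.5,
    "slow": -1.0,
    "broken": -2.5,
}

def score_with_attribution(text: str):
    """Score a piece of text and report how much each word contributed."""
    contributions = {}
    for word in text.lower().split():
        weight = WORD_WEIGHTS.get(word, 0.0)
        if weight != 0.0:
            contributions[word] = contributions.get(word, 0.0) + weight
    total = sum(contributions.values())
    return total, contributions

score, attribution = score_with_attribution("The app is great but slow")
print(f"score = {score:+.1f}")  # score = +1.0
for word, contribution in sorted(attribution.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {word!r:>10} contributed {contribution:+.1f}")
```

Whether anything like this level of traceability can be recovered from a billion-parameter neural network is exactly the open question I'd like to see answered.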
I similarly wrote about the interview's focus on language and the distinction between AI as a tool versus a partner. I agree that this distinction is an important one to make, and that viewing AI as a partner instead of a tool could be dangerous and unwise. I also agree that transparency in AI systems is important and can lead to better regulation. When AI systems are less transparent, it is also more difficult to understand the potential dangers of their capabilities, which makes regulation a very difficult task. Hopefully, as the capabilities and complexity of AI systems continue to grow, we can ensure that adequate transparency and appropriate regulation keep pace.