I really enjoyed this week's video about AI safety! I found it interesting that Ben drew a distinction between AI framed as a partner and AI framed as a tool. I think it is super important that we treat AI as a tool rather than a partner, because treating it as a partner may lead to unpredictable results: machine learning systems often produce outcomes that can't be succinctly explained. If we were to cede control to something the way we would to a partner, it is imperative that we understand how that thing makes its decisions, and that isn't the case with AI today. A student in the crowd raised this concern as a reason why AI may be hard to regulate, and I agree. I believe we need to create AI systems that are more transparent about how they arrive at their outputs. Until then, regulation seems impossible to me.
One question I'd like answered is: what has been done to make AI more transparent and traceable? I believe the issue of AI's black-box nature is well understood, but I'm curious whether the problem can actually be solved, and if so, how. Maybe seeing what data is used to generate each part of an AI's response would better inform regulators about how a particular AI is functioning.