From what I know of Shneiderman's work, even in 2018 he was already a strong advocate for so-called human-centered AI and talked about the need for systems that are transparent and accountable and that augment human abilities rather than replace them. At that time, many AI models—particularly deep neural networks—were increasingly seen as “black boxes”: they produced impressive results in fields like image recognition and language processing but offered little insight into how decisions were made. This opacity naturally raised serious concerns in critical domains like healthcare, finance, and criminal justice, where understanding the rationale behind AI conclusions is essential for ethical and safe deployment. That seems particularly pertinent now, when almost every sector is (sometimes blindly) excited about incorporating AI into its workflow. In the video, Shneiderman discusses something also echoed by many in the HCI and ethics communities: AI systems that include audit trails or logs that record decisions and support post-hoc explanations and accountability. At the time, these ideas were still considered secondary by a lot of mainstream AI researchers, who prioritized performance metrics over interpretability.
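To make the audit-trail idea a little more concrete, here is a minimal sketch in Python of what such logging could look like. Everything in it is hypothetical on my part: the `record_decision` function, the JSONL file name, and the loan-scoring example are just an illustration, not anything described in Shneiderman's talk or taken from a real system.

```python
import json
import time
import uuid
from pathlib import Path

AUDIT_LOG = Path("decision_audit.jsonl")  # hypothetical append-only audit file

def record_decision(model_name, inputs, output, rationale=None):
    """Append one model decision to a JSONL audit trail for later review."""
    entry = {
        "id": str(uuid.uuid4()),     # unique id so a single decision can be referenced later
        "timestamp": time.time(),    # when the decision was made
        "model": model_name,         # which model/version produced the output
        "inputs": inputs,            # what the model was given
        "output": output,            # what it decided
        "rationale": rationale,      # optional explanation text, if the system produces one
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["id"]

# Usage: wrap any model call so its decision is recorded for post-hoc review.
decision_id = record_decision(
    model_name="loan-risk-v2",
    inputs={"applicant_income": 52000, "credit_history_years": 7},
    output={"decision": "approve", "score": 0.81},
    rationale="Score above the 0.75 approval threshold.",
)
```

The point of an append-only record like this is that each decision can later be looked up by its id and examined, which is what makes post-hoc explanation and accountability possible in the first place.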
By 2024–2025, it seems (or at least I hope) that the field has shifted. The explosion of large language models and the widespread deployment of generative AI (like the GPT suite) should, in theory, heighten interest in explainability and traceability. As AI is increasingly used in public-facing applications (e.g., automated medical assistance, legal drafting, educational tools), the demand for transparent decision-making processes seems to me more urgent than ever. Researchers and policymakers alike should be exploring AI system logs and chain-of-thought mechanisms to track how outputs are generated. Shneiderman’s earlier emphasis on building accountable logs and visualizations to make sense of AI reasoning should become a central concern, especially with the rise of conversations around regulating AI safety and trustworthiness. Even with all of this attention, large models remain difficult to interpret, and though some strides have been made in developing explanation interfaces, full transparency, at least from what I know, is still elusive.
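As a rough illustration of what “tracking how outputs are generated” might look like, here is a small sketch that records each intermediate step of a generation run and replays the whole chain afterward. Again, the names (`log_step`, `trace_for`), the file, and the example run are my own assumptions, not a real system's API.

```python
import json
import time
from pathlib import Path

TRACE_LOG = Path("generation_trace.jsonl")  # hypothetical trace file

def log_step(run_id, step_name, content):
    """Record one intermediate step of a generation run so the output can be traced later."""
    with TRACE_LOG.open("a") as f:
        f.write(json.dumps({
            "run_id": run_id,
            "timestamp": time.time(),
            "step": step_name,
            "content": content,
        }) + "\n")

def trace_for(run_id):
    """Return, in order, every logged step behind a given run's final output."""
    steps = []
    with TRACE_LOG.open() as f:
        for line in f:
            entry = json.loads(line)
            if entry["run_id"] == run_id:
                steps.append(entry)
    return sorted(steps, key=lambda e: e["timestamp"])

# Usage: log the prompt, any intermediate context or reasoning the system exposes,
# and the final answer, then replay the chain when someone asks "how was this produced?"
log_step("run-42", "prompt", "Summarize the patient's discharge notes.")
log_step("run-42", "retrieved_context", ["note_2024_03_01.txt"])
log_step("run-42", "final_output", "Patient discharged in stable condition...")
for step in trace_for("run-42"):
    print(step["step"], "->", step["content"])
```

This obviously doesn't make a model interpretable on its own, but it is the kind of traceability infrastructure that the accountability conversation keeps pointing back to.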