
Kenneth Fleischmann
The University of Texas at Austin
Time: Friday, May 2 12:30 PM - 1:30 PM Location: Online
Zoom Link: https://tennessee.zoom.us/j/85451157513
Smart Hand Tools: Trustworthy AI for Skilled Trade Workers
Abstract:
The public release of ChatGPT in 2022 drew attention to the role of AI in the future of knowledge work, but what are the implications of AI for the future of skilled trade work? The Smart Hand Tools project, a core research project of Good Systems: Ethical AI at UT Austin, seeks to leverage edge AI, sensors, and the Internet of Things to put the power of AI into workers’ hands. Through field research in collaboration with Austin Community College, the City of Austin, and the Texas AFL-CIO, we have identified opportunities to improve skilled trade training and practice using smart hand tools. This user research informed our design, development, and deployment of prototype smart hand tools, including a smart rotary hand tool and an augmented reality welding simulator. The goal of smart hand tools is to leverage AI to empower skilled trade workers rather than threaten their jobs. A smart hand tool could provide guidance that accelerates training and reduces the incidence of repetitive stress injuries and workplace accidents. It could also give workers control of data that they could use in situations such as filing a workers’ compensation claim or engaging in collective bargaining. This talk will provide an overview of research to date and planned future research directions.
Bio:
Dr. Kenneth R. Fleischmann is a Professor in the School of Information at UT Austin. He is also the Founding Chair of Good Systems: Ethical AI at UT Austin, the Founding Director of Undergraduate Studies for the iSchool's B.A./B.S. in Informatics, and the Founding Editor-in-Chief of the ACM Journal on Responsible Computing. For twenty-five years, his research and teaching have focused on the ethics of AI and more broadly on the role of human values in the design and use of information technologies. His research has been funded by the National Science Foundation (NSF), MITRE, IARPA, Microsoft Research, Cisco Research Center, Micron Foundation, and the Public Interest Technology University Network. His research has been recognized by iConference Best Paper Awards in 2012, 2021, and 2022; the ASIS&T Best Information Behavior Conference Paper Award in 2012 and 2022; the ASIS&T SIG-SI Social Informatics Best Paper Award in 2018; the ASIS&T SIG-AI Artificial Intelligence Best Paper Award in 2023; the Civic Futures Award for Designing for the 100% in 2019; and the MetroLab Innovation of the Month Award in July 2020 and October 2021.

Vitaly Pronskikh
Oak Ridge National Laboratory
Time: Friday, Apr. 11 12:30 PM - 1:30 PM Location: MKB 622
Beyond the Algorithm: Trustworthy AI as Shared Responsibility
Abstract:
As artificial intelligence (AI) becomes increasingly embedded in high-stakes domains like nuclear reactor control and radiation protection, questions of trust and accountability grow more pressing. Conventional approaches tend to equate trust with technical performance (accuracy, reliability, explainability), but in critical applications this narrow view falls short. In this talk, I argue that AI trustworthiness must be reconceptualized as a form of shared moral responsibility, emerging not from machine autonomy but from accountable entanglement across the human and institutional actors who shape AI systems. Drawing on the philosophy of responsibility, actor-network theory, and the theory-ladenness of observation, I show how AI systems are never neutral tools: they are built on layered models, situated assumptions, and evolving narratives, from design and simulation to deployment and public communication. Through case studies in radiation protection, nuclear medicine, and autonomous control in nuclear operations, I explore how theoretical assumptions, data annotations, and validation strategies become morally significant, especially when simulation infrastructures blur the boundary between empirical evidence and constructed behavior. Ultimately, I argue that AI trustworthiness is not an intrinsic feature of the system but emerges from how responsibility is distributed, institutionalized, and sustained within the broader sociotechnical context in which the system operates.
Bio:
Vitaly Pronskikh is a Neutronics Scientist at the Second Target Station of Oak Ridge National Laboratory (ORNL) and an Associate Member of the Center for Philosophy of Science at the University of Pittsburgh. He joined ORNL in 2023 after holding research appointments at Fermi National Accelerator Laboratory beginning in 2010. His work spans nuclear and particle physics, radiation protection, and advanced computer simulations, fields that increasingly intersect with AI and questions of scientific methodology. Dr. Pronskikh’s interests include the epistemological and ethical dimensions of simulation-based science, as well as the historical and philosophical contexts in which scientific technologies evolve. He holds doctoral degrees in both Nuclear and Particle Physics and Philosophy of Science and Technology. He was a finalist in the 2024 Smoky Mountains Computational Sciences & Engineering Conference (SMCDC) Essay Contest on Trustworthy AI for Science.

Vitaly Shmatikov
Cornell University
Time: Friday, Apr. 4 12:30 PM - 1:30 PM Location: MKB 622
What You See Is Not What You Get: Multi-Modal AI Systems Are Not Secure
Abstract:
Modern AI/ML systems (in particular, semantic retrieval and LLM-based systems) accept not just text inputs but also images, audio, video, and other modalities. In this talk, I will show how attackers can exploit non-text vectors for spamming, misinformation, malicious code execution, and other adversarial objectives. I will also discuss why adversarial robustness seems difficult to achieve in multi-modal systems.
Bio:
Vitaly Shmatikov is a Professor of Computer Science at Cornell University and Cornell Tech. Research by Dr. Shmatikov, his students, and collaborators received the Caspar Bowden PET Award for Outstanding Research in Privacy Enhancing Technologies three times; Test-of-Time Awards from the IEEE Symposium on Security and Privacy (S&P / “Oakland”), ACM Conference on Computer and Communications Security (CCS), and the ACM/IEEE Symposium on Logic in Computer Science (LICS); as well as several outstanding and distinguished paper awards, most recently from USENIX Security 2021 and 2024 and EMNLP 2023.

Charlie Epperson
U.S. Coast Guard
Time: Friday, Mar. 7 12:30 PM - 1:30 PM Location: MKB 622
Bridging the Gap: AI R&D and Real-World Maritime Intelligence Application
Abstract:
The intersection of maritime intelligence and artificial intelligence presents transformative opportunities for enhancing maritime domain awareness. This presentation explores how the Coast Guard and partners have pursued AI capabilities, including computer vision, generative AI, and autonomous systems, to address critical maritime challenges such as counter-narcotics, migration, and search and rescue. CDR Epperson will detail ongoing interagency initiatives focused on developing organic AI capabilities and integrating cutting-edge commercial solutions. Furthermore, he will outline key engagement pathways for university AI researchers to contribute to the U.S. Government's AI efforts.
Bio:
Commander Charlie Epperson serves as the Deputy Chief of Artificial Intelligence within the U.S. Coast Guard's Office of Intelligence, where he leads efforts to integrate advanced AI technologies into maritime operations. Prior to this role, he was the Acting Director of Humanitarian Assistance and Disaster Relief (HADR) at the Department of Defense's Joint Artificial Intelligence Center (JAIC), spearheading initiatives to modernize disaster response through AI, including wildfire mitigation, rapid damage assessment, and enhanced search and rescue. With 22 years of operational experience, CDR Epperson has served in diverse and challenging environments, including a five-year assignment in Guam focusing on search and rescue, and deployments with the National Strike Force responding to the BP Deepwater Horizon oil spill and major hurricane response operations. CDR Epperson holds a Master of Public Administration (MPA) from the Lee Kuan Yew School of Public Policy at the National University of Singapore and a graduate certificate in Community Preparedness & Disaster Management (CPDM) from the University of North Carolina, and he completed the National Preparedness Leadership Initiative at Harvard University. He is a proud alumnus and Letterman of the University of Tennessee. Outside of his professional life, CDR Epperson and his wife are avid long-distance runners and dedicated Tennessee baseball fans.