Yuan Tian
University of California, Los Angeles
Time: Friday, Nov. 15 12:30 PM - 1:30 PM Location: Online
Zoom Link: https://tennessee.zoom.us/j/84778963050
Towards Regulated Security and Privacy in Emerging Computing Platforms
Abstract:
Computing is undergoing a significant shift. First, the explosive growth of the Internet of Things (IoT) enables users to interact with computing systems and physical environments in novel ways through perceptual interfaces. Second, machine learning algorithms collect vast amounts of data and make critical decisions on new computing systems. While these trends bring unprecedented functionality, they also drastically increase the number of untrusted algorithms, implementations, and interfaces, and the amount of private data they process, endangering user security and privacy. To address these issues, regulations such as the GDPR (General Data Protection Regulation) and the CCPA (California Consumer Privacy Act) have gone into effect. However, a massive gap exists between the desired high-level security, privacy, and ethical properties (from regulations, specifications, and users' expectations) and low-level real implementations. To bridge this gap, my work aims to 1) change how platform architects design secure systems, 2) assist developers by detecting security and privacy violations in implementations, and 3) build usable and scalable privacy-preserving systems. In this talk, I will present how my group designs principled solutions to ensure the security and privacy of emerging computing platforms. I will introduce two developer tools we built to detect security and privacy violations with machine-learning-augmented analysis; using these tools, we found large numbers of GDPR violations in web plugins and security property violations in IoT messaging protocol implementations. Additionally, I will discuss our recent work on scalable privacy-preserving machine learning: the first privacy-preserving machine learning framework for modern machine learning models and data that runs all operations on GPUs.
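To give a flavor of what "detecting privacy violations of implementations" means in practice, here is a toy sketch of checking declared policy against observed behavior. All names, data types, and rules below are hypothetical illustrations, not the actual tools from the talk:

```python
# Toy sketch: flag observed data flows whose purpose is not covered by the
# declared privacy policy (a GDPR-style purpose-limitation check).
# Everything here is a hypothetical illustration.

DECLARED_POLICY = {
    # data type -> purposes the privacy policy declares
    "email": {"account_management"},
    "location": {"navigation"},
}

OBSERVED_FLOWS = [
    # (data type, purpose) pairs recovered from code or traffic analysis
    ("email", "account_management"),
    ("location", "advertising"),  # not covered by the declared policy
]

def find_violations(policy, flows):
    """Return observed flows whose purpose is undeclared for that data type."""
    return [
        (data, purpose)
        for data, purpose in flows
        if purpose not in policy.get(data, set())
    ]

if __name__ == "__main__":
    for data, purpose in find_violations(DECLARED_POLICY, OBSERVED_FLOWS):
        print(f"potential violation: '{data}' used for undeclared purpose '{purpose}'")
```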
Bio:
Yuan Tian is an Associate Professor of Electrical and Computer Engineering, Computer Science, and the Institute for Technology, Law and Policy (ITLP) at the University of California, Los Angeles. She was previously an Assistant Professor at the University of Virginia, and she obtained her Ph.D. from Carnegie Mellon University in 2017. Her research interests span security and privacy and their interactions with computer systems, machine learning, and human-computer interaction. Her current research focuses on developing new computing platforms with strong security and privacy features, particularly in the Internet of Things and machine learning. Her work has had real-world impact: countermeasures and design changes have been integrated into platforms such as Android, Chrome, Azure, and iOS, and have shaped the security recommendations of standards organizations such as the Internet Engineering Task Force (IETF). She is a recipient of the Okawa Foundation Award (2022), Google Research Scholar Award (2021), Facebook Research Award (2021), NSF CAREER Award (2020), NSF CRII Award (2019), and Amazon AI Faculty Fellowship (2019). Her research has appeared in top-tier venues in security, machine learning, and systems, and her projects have been covered by media outlets such as IEEE Spectrum, Forbes, Fortune, Wired, and The Telegraph.
Li Xiong
Emory University
Time: Friday, Nov. 8 12:30 PM - 1:30 PM Location: MKB 622
Privacy in the Age of AI and Large Language Models: Personalized Privacy and Pretrained Models
Abstract:
As artificial intelligence (AI) and large language models (LLMs) increasingly influence every facet of our lives, ensuring the privacy of user data has become paramount. In this talk, I will review common privacy attacks for extracting training data from a trained model, and common defenses for training privacy-enhanced machine learning models on privacy-sensitive data. I will then present our recent work and discuss open challenges in two directions: 1) ensuring user-centered and personalized privacy when building privacy-enhanced models, and 2) new privacy attacks and defenses in the emerging paradigm of fine-tuning pre-trained LLMs.
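As background for the attack side of the talk, the sketch below shows the simplest member of the attack family mentioned above: a loss-threshold membership-inference attack. The model, data, and threshold are hypothetical stand-ins; this is illustrative, not the speaker's method:

```python
# Minimal sketch of a loss-threshold membership-inference attack.
# Intuition: models tend to fit their training points more tightly than
# unseen points, so an unusually low loss hints that x was in training data.

import numpy as np

def per_example_loss(model, x, y):
    """Cross-entropy loss of a (sklearn-style) classifier on one example."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return -np.log(probs[y] + 1e-12)

def membership_guess(model, x, y, threshold):
    """Guess 'member' when the model's loss on (x, y) is suspiciously low.
    The threshold would be calibrated on shadow models in a real attack."""
    return per_example_loss(model, x, y) < threshold
```

Defenses such as differentially private training (e.g., DP-SGD) aim to make exactly this kind of distinguishing test unreliable.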
Bio:
Li Xiong is a Samuel Candler Dobbs Professor of Computer Science and Biomedical Informatics at Emory University. She holds a Ph.D. from the Georgia Institute of Technology, an M.S. from Johns Hopkins University, and a B.S. from the University of Science and Technology of China. Her research lab, Assured Information Management and Sharing (AIMS), conducts research on trustworthy and privacy-enhancing data-driven AI solutions for healthcare, public health, and spatial intelligence. She is an IEEE Fellow, recognized for her contributions to privacy-preserving and secure data sharing and analytics. She has published over 200 papers and received seven best paper or runner-up awards. She serves or has served as an associate editor for TKDE, TDSC, and VLDBJ, as general or program chair for SIGSPATIAL, CIKM, and BigData, and as program vice chair for SIGMOD, VLDB, and ICDE. Her research has been supported by government agencies (NSF, NIH, IARPA, AFOSR) and by industry and foundations (Mitsubishi, Cisco, AT&T, Google, IBM).
Yuan Hong
University of Connecticut
Time: Friday, Nov. 1 12:30 PM - 1:30 PM Location: MKB 622
Certifying Trustworthy Machine Learning: From Defenses to Attacks
Abstract:
In the past decade, adversarial attacks and defenses have been extensively studied to expose vulnerabilities in machine learning models and to develop countermeasures that enhance their robustness. This talk will present our recent advances in certifying both defenses and attacks, with a focus on moving from empirical approaches to provable guarantees. First, we will introduce Text-CRS, the first generalized certified robustness framework for language models against a wide range of word-level adversarial operations, including synonym substitution, word reordering, insertion, and deletion. By leveraging randomized smoothing in both permutation and embedding spaces, Text-CRS improves certified accuracy and robustness. Second, we will shift focus to the attack side by introducing certifiable black-box adversarial attacks. While certified defenses have been well studied, this is the first attack framework that provides provable guarantees on the attack success probability (ASP); it reveals critical weaknesses in machine learning models, even those protected by state-of-the-art defenses. Our attack framework constructs a continuous space of adversarial examples with a lower-bounded (high) ASP. Finally, we will discuss certification in other areas of trustworthy machine learning.
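For readers unfamiliar with the building block behind Text-CRS, here is a minimal sketch of classic Gaussian randomized smoothing (Cohen et al., 2019), which Text-CRS adapts to permutation and embedding spaces. The base classifier is a hypothetical stand-in, and the confidence-bound machinery of the full method is simplified away:

```python
# Minimal sketch of Gaussian randomized smoothing for certified robustness.
# Predict by majority vote under input noise; the vote margin yields a
# radius within which the prediction provably cannot change.

import numpy as np
from scipy.stats import norm

def smoothed_predict(base_classifier, x, sigma=0.5, n=1000, num_classes=10):
    """Majority vote of base_classifier under Gaussian noise, plus a
    certified L2 radius derived from the top-class vote fraction."""
    rng = np.random.default_rng(0)
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        votes[base_classifier(x + rng.normal(0.0, sigma, size=x.shape))] += 1
    top = int(votes.argmax())
    p_a = votes[top] / n  # real certificates use a lower confidence bound here
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top, radius    # prediction is provably stable within this L2 radius
```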
Bio:
Yuan Hong is an Associate Professor and Collins Aerospace Endowed Professor in the School of Computing at the University of Connecticut (UConn), where he directs the Data Security and Privacy (DataSec) Laboratory. His research spans security, privacy, and trustworthy machine learning, with a focus on differential privacy, secure computation, applied cryptography, and adversarial attacks and provable defenses in machine learning, computer vision, (large) language models, and cyber-physical systems. His work has been published extensively in top-tier conferences in security (e.g., S&P, CCS, USENIX Security, NDSS) and data science (e.g., SIGMOD, VLDB, NeurIPS, CVPR, ECCV, EMNLP, KDD, AAAI), as well as in top interdisciplinary journals. He is a recipient of the NSF CAREER Award (2021), Cisco Research Awards (2022, 2023), and a CCS Distinguished Paper Award (2024), and was a finalist for the Meta Research Award (2021). He regularly serves on the technical program committees (PC) or as a senior PC member for top security and data science conferences, and is an Associate Editor for IEEE Transactions on Dependable and Secure Computing (TDSC) and Computers & Security.
Mi Zhang
The Ohio State University
Time: Friday, Oct. 18 12:30 PM - 1:30 PM Location: MKB 622
Building Efficient, Scalable, and Heterogeneous Federated Learning Systems
Abstract:
Data privacy has become a critical concern in modern AI systems. As a remedy, federated learning (FL) has emerged as a privacy-preserving machine learning paradigm in which clients distributed across different geographical locations collaboratively train an AI model while keeping their own data local. While theoretical studies of federated learning have made significant progress, we still face challenges in building practical federated learning systems. In this talk, I will share our experiences in building efficient, scalable, and heterogeneous federated learning systems. First, I will present our work on an importance-sampling-based FL framework that significantly enhances training efficiency under limited wireless network bandwidth without compromising training quality. Second, I will focus on the client selection component of the FL pipeline and discuss our work on a data- and system-heterogeneity-aware client selection scheme that jointly enhances the efficiency and scalability of FL systems. Third, I will present a simple yet powerful framework that enables model-heterogeneous FL, where models with different capacities can be trained on end systems with heterogeneous resources; this work lays the foundation for FL systems that train large-scale AI models such as large language models and foundation models in general. I will conclude by briefly introducing our recent initiative, FedAIoT, whose vision is to extend FL to the much richer data modalities and compute devices encountered in the real world.
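To make the FL pipeline concrete, the sketch below shows one federated-averaging round with a simple heterogeneity-aware selection step in the spirit of the client selection work described above. The scoring rule, client fields, and local trainer are hypothetical illustrations, not the speaker's scheme:

```python
# Minimal sketch: heterogeneity-aware client selection + one FedAvg round.
# Client dicts and the utility/latency scoring are hypothetical.

import numpy as np

def select_clients(clients, k):
    """Prefer clients with useful data (high statistical utility) and fast
    hardware (low expected round time) -- a toy joint score."""
    scores = [c["statistical_utility"] / c["expected_round_time"] for c in clients]
    order = np.argsort(scores)[::-1]
    return [clients[i] for i in order[:k]]

def fedavg_round(global_weights, selected, local_train):
    """One round: broadcast the model, train locally on each client,
    then aggregate with sample-count weighting (FedAvg)."""
    updates, sizes = [], []
    for c in selected:
        w = local_train(c, np.copy(global_weights))  # local SGD on client data
        updates.append(w)
        sizes.append(c["num_samples"])
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())
```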
Bio:
Mi Zhang is an Associate Professor and the Director of the AIoT and Machine Learning Systems Lab at The Ohio State University (OSU). He received his Ph.D. in Computer Engineering and M.S. degrees in both Electrical Engineering and Computer Science from the University of Southern California, and his B.S. in Electrical Engineering from Peking University. Before joining OSU, he was a tenured Associate Professor at Michigan State University and a postdoctoral scholar at Cornell University. His research lies at the intersection of systems and machine intelligence, spanning edge AI, efficient AI, federated learning, multimodal large language models and generative AI, systems for machine learning, and human-centered AI for health and social good. Dr. Zhang has received a number of awards for his work: he is the 4th-place winner (1st place in the U.S. and Canada) of the 2019 Google MicroNet Challenge (CIFAR-100 track), the third-place winner of the 2017 NSF Hearables Challenge, and the champion of the 2016 NIH Pill Image Recognition Challenge. He has received eight best paper awards and nominations, as well as the NSF CRII Award, Facebook Faculty Research Award, Amazon Machine Learning Research Award, MSU Innovation of the Year Award, and the inaugural USC ECE SIPI Distinguished Alumni Award in the Junior/Academia category.
Neil Gong
Duke University
Time: Friday, Oct. 4 12:30 PM - 1:30 PM Location: MKB 622
Secure Content Moderation for Generative AI
Abstract:
Generative AI, such as GPT-4 and DALL-E 3, raises many ethical and legal concerns, including the generation of harmful content, the scaling of disinformation and misinformation campaigns, and the disruption of education and learning. Content moderation for generative AI aims to address these concerns by 1) preventing a generative AI model from synthesizing harmful content and 2) detecting AI-generated content. Prevention is often implemented using safety filters, while detection is often implemented via watermarking; both have recently been widely deployed in industry. In this talk, we will discuss the security of existing prevention and watermark-based detection methods in adversarial settings.
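As context for the detection side, here is a minimal sketch of how bitstring watermark detection typically works: decode a bitstring from the content and compare it to the owner's watermark. The decoder and threshold are hypothetical stand-ins; real systems learn the encoder/decoder jointly:

```python
# Minimal sketch of bitstring watermark detection for generated content.
# The decoded bits would come from a learned watermark decoder (not shown).

import numpy as np

def detect(decoded_bits, watermark_bits, tau=0.9):
    """Flag content as AI-generated if enough decoded bits match the
    owner's watermark. Adversarial settings like those in the talk aim to
    push the match rate below tau on watermarked content (evasion) or
    above it on clean content (forgery)."""
    match_rate = np.mean(np.asarray(decoded_bits) == np.asarray(watermark_bits))
    return match_rate >= tau, match_rate
```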
Bio:
Neil Gong is an Associate Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science (secondary appointment) at Duke University. His research interests are cybersecurity and privacy, with a recent focus on AI security. He has received an NSF CAREER Award, an Army Research Office Young Investigator Program (YIP) Award, a Rising Star Award from the Association of Chinese Scholars in Computing, an IBM Faculty Award, a Facebook Research Award, and multiple best paper or best paper honorable mention awards. He received a B.E. from the University of Science and Technology of China in 2010 (with the highest honor) and a Ph.D. in Computer Science from the University of California, Berkeley in 2015.
Murat Kantarcioglu
Virginia Tech
Time: Friday, Sep. 20 12:30 PM - 1:30 PM Location: MKB 622
Defending and Defeating AI: Protecting the Good, Attacking the Bad for Privacy, Security and Fairness
Abstract:
AI models are increasingly being deployed for a wide range of critical tasks, from healthcare diagnosis to autonomous driving. However, recent research has revealed that these models are vulnerable to various attacks, including data poisoning and test-time evasion, which can severely compromise their effectiveness. In this talk, we will begin by exploring some of our current work on enhancing the robustness of AI models by reducing the transferability of attacks and developing novel defense techniques in the context of federated learning. Additionally, we will discuss how blockchain-based incentive mechanisms can be employed to further mitigate potential attacks by fostering a more secure environment for AI deployment. Finally, we will discuss whether explainable-AI-based approaches could be used to rectify some AI errors. In the second part of the talk, we will shift focus to the offensive side, presenting our work on attacking AI models that may violate privacy or fairness. These proactive attacks are designed to expose and rectify flaws in AI systems, ensuring they are used in a way that protects individual privacy and promotes fairness.
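For readers new to test-time evasion, the sketch below shows the classic fast gradient sign method (FGSM, Goodfellow et al.), the simplest member of the attack class mentioned above. The model and tensors are hypothetical; this is illustrative background, not the speaker's method:

```python
# Minimal sketch of a test-time evasion attack (FGSM): perturb the input
# in the direction that most increases the loss, within an epsilon budget.

import torch

def fgsm(model, loss_fn, x, y, epsilon=0.03):
    """Return an adversarial example x' with ||x' - x||_inf <= epsilon.
    Assumes inputs live in [0, 1] (e.g., normalized images)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```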
Bio:
Dr. Murat Kantarcioglu is a Professor and CCI Faculty Fellow in the Department of Computer Science at Virginia Tech. Before joining Virginia Tech, he was an Ashbel Smith Professor of Computer Science at UT Dallas. He earned his Ph.D. in Computer Science from Purdue University in 2005, where he was awarded the Purdue CERIAS Diamond Award for Academic Excellence. He also holds affiliations as a Faculty Associate at Harvard's Data Privacy Lab and as a Visiting Scholar at UC Berkeley's RISELab. Dr. Kantarcioglu's research centers on integrating cybersecurity, data science, and blockchain technologies to develop secure and efficient data processing and sharing mechanisms. His research has been supported by numerous grants from agencies such as NSF, AFOSR, ARO, ONR, NSA, and NIH. He has authored over 180 peer-reviewed papers in top-tier venues including NDSS, CCS, USENIX Security, KDD, SIGMOD, ICDM, ICDE, PVLDB, and several IEEE/ACM Transactions, and has served as Program Co-Chair for conferences such as IEEE ICDE, ACM SACMAT, IEEE Cloud, IEEE CNS, and ACM CODASPY. His research has been featured by media outlets such as the Boston Globe, ABC News, PBS/KERA, and DFW television, and he has received multiple best paper awards. He is the recipient of several notable awards, including the NSF CAREER Award, the AMIA 2014 Homer R. Warner Award, and the IEEE ISI 2017 Technical Achievement Award, jointly presented by the IEEE SMC and IEEE ITS societies, for his contributions to data security and privacy. He is a Fellow of both AAAS and IEEE.