
Time: Friday, November 7, 12:30 PM - 1:30 PM      Location: MKB 622

Zoom Link: https://tennessee.zoom.us/s/85616105317

Is Cybersecurity Dead in the Next 12-18 Months?

Abstract:

Technology and cybersecurity are undergoing a fundamental transformation as AI agents transition from experimental tools to operational deployments inside companies, governments, and other organizations. Within the next 12-18 months, organizations will confront unprecedented challenges that require rethinking core security primitives developed over decades. This presentation addresses the inflection points that will define AI security in the near term: the agentic identity crisis, the democratization of cyber capability, AI coding agents, agents as the new insider threat, and the refactoring of security economics.

AI agents don't fit existing identity models. They're not service accounts: they learn and adapt. They're not user accounts: they operate at superhuman speed and can spawn copies of themselves. Organizations deploying AI agents that work autonomously for hours or days, maintain persistent memory, collaborate via Webex, Slack, Teams, or other tools, and need access to sensitive systems face an urgent question: what is the identity primitive for an AI agent? Those who solve this first gain an enormous competitive advantage; those who don't will either cripple their AI deployments with excessive restrictions or create vulnerabilities that make traditional breaches look quaint.

Recent demonstrations from DARPA's AI Cyber Challenge and Google's Big Sleep, CodeMender, and other projects show autonomous systems finding and patching vulnerabilities with minimal human intervention. Within 18 months, moderately motivated actors with modest API budgets will execute attacks that once required nation-state resources. The traditional hierarchy of cyber capability is collapsing into a flatter structure where third-tier actors operate like second-tier actors, and first-tier actors will achieve capabilities we can barely imagine.
Several industry estimates suggest that, within 18 months, more than half of new code at leading tech companies will be AI-generated. We face a crisis in the software development lifecycle: traditional "shift-left" security is insufficient when AI coding assistants can introduce subtle vulnerabilities at scale, faster than human review can catch them. Cisco's Project CodeGuard addresses this challenge by providing real-time security analysis of AI-generated code, establishing guardrails that prevent vulnerable code from entering the development pipeline, and creating a new paradigm for continuous AI-driven security remediation integrated directly into the IDE. We will discuss how these new capabilities will change the software development lifecycle and the role of security in it.

Virtual collaborators, digital twins, and other agentic implementations with persistent memory, working on multiple projects simultaneously, will have legitimate access to sensitive systems and perform actions that would seem suspicious for humans but are normal for AI. This is the insider-risk problem, with AI agents as the potential insider threat. Solutions require sophisticated detection systems that understand intent, not just actions: establishing baselines for agent behavior, detecting deviations from stated objectives, identifying prompt-injection attempts, and responding in seconds.

When engineers can build custom security tools in a weekend using AI assistance, the traditional build-versus-buy model inverts. The enterprise security market will compress dramatically as value shifts from vendors to organizations that effectively deploy AI for their own security needs. Winners will possess unique data, provide genuinely hard-to-build infrastructure, or transform into AI-augmented service providers.
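The baseline-and-deviation idea can be sketched in a few lines: learn per-feature statistics from an agent's historical activity, then score new observations by their largest z-score. The feature names and the threshold are illustrative assumptions; real systems would use far richer behavioral signals and intent models.

```python
import statistics

def build_baseline(history):
    """Compute per-feature (mean, population stdev) from an agent's
    past activity records, e.g. {"api_calls": 120, "files_touched": 8}."""
    baseline = {}
    for feature in history[0]:
        values = [record[feature] for record in history]
        baseline[feature] = (statistics.mean(values), statistics.pstdev(values))
    return baseline

def deviation_score(baseline, observation):
    """Largest z-score across features; large values flag behavior
    that departs from this agent's established norm."""
    scores = []
    for feature, (mu, sigma) in baseline.items():
        if sigma == 0:
            scores.append(0.0 if observation[feature] == mu else float("inf"))
        else:
            scores.append(abs(observation[feature] - mu) / sigma)
    return max(scores)

history = [
    {"api_calls": 100, "files_touched": 5},
    {"api_calls": 110, "files_touched": 6},
    {"api_calls": 95,  "files_touched": 5},
    {"api_calls": 105, "files_touched": 4},
]
baseline = build_baseline(history)

normal_score  = deviation_score(baseline, {"api_calls": 102, "files_touched": 5})
anomaly_score = deviation_score(baseline, {"api_calls": 900, "files_touched": 40})
```

A per-agent baseline matters here because, as the abstract notes, activity that would be suspicious for a human (thousands of API calls an hour) may be entirely normal for an agent, and vice versa.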

Bio:

Omar Santos is a Distinguished Engineer at Cisco focusing on artificial intelligence (AI) security, cybersecurity research, incident response, and vulnerability disclosure. He is the co-chair of the Coalition for Secure AI (CoSAI) and a board member of the OASIS Open standards organization. Omar also chairs the OpenEoX and Common Security Advisory Framework (CSAF) technical committees, and his work led to the creation of the CSAF ISO standard. His collaborative efforts extend to numerous organizations, including the Forum of Incident Response and Security Teams (FIRST) and the Industry Consortium for Advancement of Security on the Internet (ICASI). Omar is the co-chair of the FIRST PSIRT Special Interest Group (SIG) and led the DEF CON Red Team Village for several years. He is the author of over 25 books, 21 video courses, and more than 50 academic research papers; a renowned expert in ethical hacking, vulnerability research, incident response, and AI security; and the holder of multiple granted patents in cybersecurity. Prior to Cisco, Omar served in the United States Marine Corps, focusing on the deployment, testing, and maintenance of Command, Control, Communications, Computers, and Intelligence (C4I) systems.


Somesh Jha
University of Wisconsin

Time: Friday, October 24, 12:30 PM - 1:30 PM      Location: MKB 622

Zoom Link: https://tennessee.zoom.us/j/81554281291

Safety of AI through the lens of Security and Cryptography

Abstract:

AI techniques are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, healthcare, natural language processing, and malware detection. Of particular concern is the use of AI algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious harm. Interest in this area of research has exploded. In this talk, we will emphasize the need for a security and cryptography mindset in trustworthy machine learning, and then cover some lessons learned.

Bio:

Somesh Jha received his B.Tech in Electrical Engineering from the Indian Institute of Technology, New Delhi, and his Ph.D. in Computer Science from Carnegie Mellon University under the supervision of Prof. Edmund Clarke (a Turing Award winner). He is currently the Lubar Professor in the Computer Sciences Department at the University of Wisconsin-Madison. His work focuses on the analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and the analysis of malicious code; recently, he has focused on trustworthy ML. He has published extensively in highly refereed conferences and prominent journals and has won numerous best-paper and distinguished-paper awards. Prof. Jha received the CAV Award for his work on CEGAR and the IIT Delhi Distinguished Alumni Award. He is a Fellow of the ACM, IEEE, and AAAS.

Jenny Davis
Vanderbilt University

Time: Friday, September 11, 12:30 PM - 1:30 PM      Location: MKB 622

After Algorithmic Fairness: The Myth of Neutrality and Power of Repair

Abstract:

The field of algorithmic ethics is substantial and growing, working to mitigate harms and realize social good. The fairness paradigm dominates this field across AI, machine learning, and other data-driven domains. Algorithmic fairness aims to (a) undercut human biases by replacing subjective assessments with 'objective' computation and (b) eliminate biases in data and data-derived outputs. Despite significant investment from academia, industry, and government, algorithmic fairness has failed to live up to its promise: algorithmic harms propagate and persist while social inequities amplify and embed. This talk presents algorithmic reparation as an alternative proposal. Drawing on a paper, a collaborative workshop, a special issue, and, especially, a forthcoming book, the talk delineates a reparative paradigm for algorithmic futures. This begins with a critique of fairness as a viable value standard, making the case for a shift toward redress. The shift is supported by a tripartite framework of algorithmic reparation and its implementation across diverse use cases, along with careful consideration of the obstacles and inroads to reparative praxis.

Bio:

Jenny L. Davis is the Gertrude Conaway Vanderbilt Chair and Professor of Sociology at Vanderbilt University, Honorary Professor of Sociology at The Australian National University, and Non-Resident Fellow at the Center for Democracy and Technology.