CS Colloquium
Fall 2025
Presented by the Computer Science Department
Mondays, 12:00–12:50 pm, Stevenson Hall 1300
All lectures are free and open to the public
AI, Ethics, and Psychiatry

Tuomas Vesterinen
Stanford University
Stevenson 1300
Monday, September 29, 2025
Researchers in psychiatry increasingly believe that artificial intelligence (AI) can uncover the complex pathological processes of mental disorders. They hope this will advance precision psychiatry through personalized diagnoses and treatments. However, current AI research tends to adopt a biologically focused view of disorders, which overlooks how psychiatric conditions are shaped by social and cultural factors, and how their classifications are laden with values. I argue that, if these factors are not addressed, the specific nature of AI systems may lead to unintended ethical and social implications. AI raises concerns regarding the human-AI therapeutic relationship, ambiguity of responsibility, and epistemic injustices, particularly the issue of trusting AI more than service users. Additionally, the influence of datafication risks neglecting uncommon symptoms and experiences of mental disorders, resulting in an oversimplified understanding of these conditions. To mitigate these challenges, I argue for a shift from narrowly focusing on precision to a socially just approach to AI. This approach should involve all stakeholders in evaluating the ethical and social implications of AI in psychiatry, emphasizing value-sensitive design and domain relativity of classifications.
An Interactive Workshop on Value Sensitive Algorithm Design

Sarah Thornton
Stevenson 1300
Monday, October 6, 2025
The choices we make in our algorithms reverberate across society in many dimensions, changing our expectations of mobility, safety, employment, and other aspects of life we value. These major societal changes will, in turn, be the result of many small engineering decisions that, when aggregated, determine the system's behavior. For our technology to evolve in ways we desire, we must bridge the gap between individual engineering decisions and the societal impacts they create. This workshop discusses some of the challenges engineers face and proposes a value-centered approach that engages stakeholders early in the process and identifies the values and tensions that must be resolved.
Program Analysis for Securing C/C++ Code

Tapti Palit
UC Davis
Stevenson 1300
Monday, October 13, 2025
C and C++ remain two of the most widely used programming languages, powering everything from operating systems to critical infrastructure. However, their lack of built-in memory safety leaves applications vulnerable to exploitation, and memory corruption vulnerabilities cost the industry billions of dollars annually. To mitigate these risks, software defenses such as Control Flow Integrity (CFI) are deployed, but their effectiveness depends heavily on the precision of underlying program analysis.
In this talk, I will present my research on advancing program analysis techniques to improve software security. First, I will introduce the Invariant-Guided Pointer Analysis technique, which enhances the precision of CFI mechanisms by 59%, significantly strengthening their security guarantees. Then, I will discuss our lab's latest research on automatically transpiling C/C++ code into memory-safe languages such as Rust. Specifically, I will describe our hybrid approach, which combines Large Language Models (LLMs) with program analysis techniques to achieve high-accuracy C-to-Rust transpilation. Together, these efforts improve the security of legacy software and build a foundation for safer, more reliable software systems.
How Kant’s Ethics Can Shape AI Alignment

Oluwaseun Sanwoolu
University of Kansas
Stevenson 1300
Monday, October 27, 2025
What would it mean to align AI systems with Kant’s moral theory if they can’t be moral agents like humans? I argue that it is still possible. The first challenge is that Kant’s ideas are built for moral agents, but on the view I defend, AI can still follow his basic test: formulate a rule for action and ask whether it would work for everyone in that situation. This gives us a way to design and check AI decision-making using the Formula of Universal Law.
The second challenge is that Kant’s approach can seem too rigid to handle different situations. I show that his framework can adapt to context-sensitivity through practical judgment. While AI lacks human judgment, it can use a functionally similar mechanism such as transformer models to factor in important details before acting. This means Kant’s ideas can still guide AI design in a way that is both consistent and flexible.
Advise-a-palooza for Spring 2026
Dept Event
Overlook (Student Center, 3rd floor)
Monday, November 3, 2025
CS students, join us for Advise-a-palooza for Spring 2026 registration.
Project STORM (Sociotechnical Operations Risk Management): Military Ethics in the World of AI

John P. Sullins III
Sonoma State University
Stevenson 1300
Monday, November 10, 2025
Sociotechnical risks are a reality of all technology design, and they matter particularly in an organization like the Department of Defense. We will look at a two-year project housed here at SSU, in which SSU faculty and students collaborated with Cal Poly SLO faculty and students to build a prototype application that helps DoD projects identify how best to apply the responsible AI toolkit and NIST framework to their particular work. We will also examine how LLMs present new problems for military AI applications.
Fall 2025 Short Presentations of Student Research and Awards
Dept Event
Stevenson 1300
Monday, November 24, 2025
Short presentations of research carried out by Sonoma State Computer Science students, plus CS awards.
Fall 2025 Presentations of Student Capstone Projects
Dept Event
Stevenson 1300
Monday, December 1, 2025
Short presentations of capstone projects carried out by Sonoma State Computer Science students.