
CS Colloquium

Spring 2025

Presented by the Computer Science Department
Mondays 12:00 - 12:50pm, Stevenson Hall 1300
All lectures are free and open to the public


Resource Management and Query Scheduling in Big Data Management Systems

Shiva Jahangiri
Santa Clara University

Stevenson 1300
Monday, February 10, 2025

In today’s dynamic data systems, managing resources effectively is key to maintaining performance and fairness. This talk will present techniques for allocating memory among queries arriving unpredictably, ensuring efficient use of resources. We’ll explore methods to balance fairness across queries with diverse demands and latency needs, as well as strategies for prioritizing and ordering queries from different classes based on priority values. These approaches address the critical challenges of real-time query management in modern systems.

Demystifying Vision Transformers: From Theory to Industry Insights

Abhishek Aich
NEC Laboratories

Stevenson 1300
Monday, February 17, 2025

“But what are transformers? Why such a name? How do these neural networks work? Why are they everywhere right now? What propelled their widespread adoption?” In this talk, we will explore possible answers to these questions, focusing primarily on computer vision (i.e., vision transformers). Next, we will delve into my current research at NEC Laboratories America, which includes enhancing the efficiency of vision transformers to optimize the trade-off between performance and computational cost. We will also see (and appreciate) the gap between cutting-edge research and actual deployments. Finally, I will share some personal insights on doing a PhD and pursuing a research career in industry.

Revealing Hidden Stories: Co-Designing the Thámien Ohlone Augmented Reality Tour

Kai Lukoff
Santa Clara University

Stevenson 1300
Monday, February 24, 2025

The Santa Clara University campus is adorned with symbols and monuments, including a Spanish Mission Church, that highlight its Catholic heritage. However, the presence and history of the Ohlone Native Americans, who have inhabited this land for thousands of years and continue to live in the region, receive little to no recognition. How can we utilize augmented reality (AR) to share these hidden stories?

In collaboration with the Muwekma Ohlone Tribe, our interdisciplinary team developed the Thámien Ohlone AR tour. The tour reveals hidden stories, encourages visitors to engage in critical reflection, and inspires visions of a more just future; it received the Best Movie Award at CHI 2024, the leading conference in human-computer interaction. This talk will share insights on co-designing location-based AR experiences for social impact and explore the potential of AR in preserving cultural heritage.

BIO
Kai Lukoff is an assistant professor in the Department of Computer Science & Engineering at Santa Clara University. He leads the Human-Computer Interaction Lab, focusing on technologies with social impact. His recent work focuses on co-design methods for location-based augmented reality. His research has been featured in prominent conferences such as CHI, CSCW, IMWUT, and DIS, and he was honored with the 2023 Outstanding Dissertation Award from ACM SIGCHI.

Discovering Bias in Large Language Models (LLMs)

Mehdi Bahrami
Principal Researcher, Fujitsu Research

Stevenson 1300
Monday, March 10, 2025

The rapid proliferation of large language models (LLMs) has brought both opportunities and challenges. While LLMs, and generative AI technologies more broadly, can substantially improve many routine and autonomous tasks, yielding cost and performance benefits, they are also prone to personal and societal harms such as bias, stereotyping, misinformation, and hallucination. These ethical concerns have prompted stakeholders around the world to call for regulatory measures that ensure the safe and beneficial use of generative AI. In parallel, research efforts aim to alleviate these issues by developing bias detection and mitigation strategies for generative AI. Toward that goal, this talk explores the nature of bias in LLMs, highlights existing detection methods, and examines emerging techniques for mitigating bias in large-scale language models. Through a combination of theoretical insights and practical examples, it offers strategies that developers, researchers, and policymakers can adopt to promote bias awareness, fairness, and accountability in AI applications.

Exploring Metric Dimension on Random Graphs

Carter Tillquist
CSU Chico

Stevenson 1300
Monday, March 24, 2025

The metric dimension of a graph G=(V,E) is the size of a smallest set of nodes whose shortest-path distances uniquely identify every node in G. This concept is closely related to trilateration, the idea underlying the Global Positioning System (GPS), and has applications in navigation and in generating embeddings for symbolic data analysis. In this talk, we discuss previous work and preliminary results on the behavior of metric dimension in Hamming graphs and several random graph models. Bounds on metric dimension and efficient heuristics for finding near-optimal resolving sets are also covered.
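The definition above can be checked directly on small graphs. As a rough illustrative sketch (not from the talk): a brute-force search over landmark subsets, exponential in the number of nodes and assuming a connected undirected graph, might look like this, where the example graphs and node labels are my own.

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, s):
    """Shortest-path (hop-count) distances from s in an unweighted graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metric_dimension(adj):
    """Smallest k such that some k-subset of nodes (a resolving set)
    gives every node a unique vector of distances to the subset.
    Assumes adj is a connected undirected graph: {node: [neighbors]}."""
    dist = {s: bfs_distances(adj, s) for s in adj}
    nodes = list(adj)
    for k in range(1, len(nodes) + 1):
        for landmarks in combinations(nodes, k):
            sigs = {tuple(dist[s][v] for s in landmarks) for v in nodes}
            if len(sigs) == len(nodes):  # all nodes distinguished
                return k
    return len(nodes)

# Path P4 (0-1-2-3): a single endpoint resolves every node -> dimension 1
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# Cycle C4: no single node suffices (opposite neighbors tie) -> dimension 2
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(metric_dimension(path), metric_dimension(cycle))  # 1 2
```

This exhaustive search is only feasible for very small graphs; computing metric dimension exactly is NP-hard in general, which is why the heuristic algorithms mentioned in the abstract matter.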

Advise-a-palooza for Fall 2025

Dept Event

Overlook (Student Center, 3rd floor)
Monday, April 7, 2025

CS students, join us for Advise-a-palooza for Fall 2025 registration.

Program Analysis for Securing C/C++ Code

Tapti Palit
UC Davis

Stevenson 1300
Monday, April 14, 2025

C and C++ remain two of the most widely used programming languages, powering everything from operating systems to critical infrastructure. However, their lack of built-in memory safety leaves applications vulnerable to exploitation, and memory corruption vulnerabilities cost the industry billions of dollars annually. To mitigate these risks, software defenses such as Control Flow Integrity (CFI) are deployed, but their effectiveness depends heavily on the precision of underlying program analysis.

In this talk, I will present my research on advancing program analysis techniques to improve software security. First, I will introduce the Invariant-Guided Pointer Analysis technique, which improves the precision of CFI mechanisms by 59%, significantly strengthening their security guarantees. Then, I will discuss our lab's latest research on automatically transpiling C/C++ code into memory-safe languages such as Rust. Specifically, I will describe our hybrid approach, which combines Large Language Models (LLMs) with program analysis techniques to achieve high-accuracy C-to-Rust transpilation. Together, these efforts improve the security of legacy software and build a foundation for safer, more reliable software systems.

Confidence Code: Reinforcing the Trust Barrier in AI

Irfan Mirza
Director of Enterprise Resilience at Microsoft

Stevenson 1300
Monday, April 21, 2025

As artificial intelligence (AI) continues to evolve and permeate various aspects of industry and life, ensuring public confidence in the technologies underlying AI is paramount. This discussion explores the critical role of responsible AI practices and the ethical citizenship that AI providers must adopt to foster and reinforce trust among users. With the anticipated growth of AI applications across industries, computer scientists and engineers will face an increasing burden of responsibility to create systems that prioritize fairness, accountability, and transparency.

We will discuss the current landscape of public perception regarding AI and the essential principles that are foundational to responsible AI development. As AI becomes more pervasive, the industry must engage diverse stakeholders and promote transparency to address concerns about bias and inequality. Furthermore, we must examine the benefits and challenges of regulatory frameworks in guiding ethical practices and enhancing public trust.

This discussion will also highlight actionable strategies that AI providers can use to demonstrate their commitment to responsible citizenship. Ultimately, it will outline a vision for the future, emphasizing that the path to widespread AI adoption hinges on the industry’s ability to uphold its ethical obligations and build lasting trust with the public.

Spring 2025 Short Presentations of Student Research and Awards

Dept Event

Stevenson 1300
Monday, April 28, 2025

Short presentations of research carried out by Sonoma State Computer Science students, along with CS awards.

Spring 2025 Presentations of Student Capstone Projects

Dept Event

Stevenson 1300
Monday, May 5, 2025

Short presentations of capstone projects carried out by Sonoma State Computer Science students.