Compiler optimizations and software engineering techniques for improving application launch time
Google, Santa Clara, CA
Application launch time has been a concern for a long time, and it has become more important with mobile apps, whose startup time often regresses as features are added. For many mobile applications, launch time tests users' patience. While devices are becoming faster, developers find creative ways to regress startup time by adding features or unnecessary layers of software engineering.
Optimizing for launch time is a little different from optimizing purely for code size or for performance. Some code size optimizations help startup, and so do many redundancy elimination optimizations. In this presentation I'll discuss program instrumentation and measurement techniques that provide insight into the different parts of application startup. I'll talk about compiler optimizations that help application launch time, and I'll also present other methodologies to improve startup time, such as order file generation and removing static initializers.
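One technique mentioned above, removing static initializers, amounts to deferring expensive work from launch to first use. This is not code from the talk; it is a minimal illustrative sketch in Python of the same eager-versus-lazy trade-off, with the class names and workload invented for the example:

```python
import time

# Eager style: all work happens up front, inflating launch time.
class EagerService:
    def __init__(self):
        # Hypothetical expensive setup, paid at startup.
        self.table = [i * i for i in range(1_000_000)]

# Lazy style: construction is cheap; the expensive work is deferred
# until the first time the data is actually needed.
class LazyService:
    def __init__(self):
        self._table = None

    @property
    def table(self):
        if self._table is None:
            self._table = [i * i for i in range(1_000_000)]
        return self._table

start = time.perf_counter()
svc = LazyService()
startup_cost = time.perf_counter() - start  # near zero: nothing built yet
```

The total work is unchanged, but it no longer sits on the launch path — which is exactly why removing static initializers (whose work runs before `main` even starts) improves perceived startup time.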
Using computer science based approaches to determine forest structure from remote sensing data
Associate Professor, Department of Biology
Sonoma State University
In this talk, I will discuss my current research project, 3DForests, which aims to evaluate the use of remote sensing techniques to rapidly and more accurately estimate aboveground biomass (AGB) for a range of tree species and to estimate crucial fuels parameters that help validate or refine fuel treatment tools and fire behavior models across diverse California forests. Specifically, we use state-of-the-art terrestrial laser scanning (TLS) combined with modern data processing techniques to acquire detailed measurements of 3D forest structure in coastal and southern Cascade forests of northern California. Importantly, I will focus on computer science based approaches to analyzing these 3D data, including quantitative structure models.
Embattling for a Deep Fake Dystopia
Hermosa Beach, CA, United States
Recent advances in the democratization of AI have been enabling the widespread use of generative models, causing the exponential rise of fake content. Nudification of over 680,000 women by a social bot, impersonation scams worth millions of dollars, or spreading political misinformation through synthetic politicians are just the footfall of the deep fake dystopia.
As every technology is simultaneously built with its counterpart to neutralize it, this is the perfect time to fortify our eyes with deep fake detectors. Deep fakes depend on photorealism to disable our natural detectors: we cannot simply look at a video to decide whether it is real. On the other hand, this realism is not yet preserved in the physiological, biological, and physical signals of deep fakes. In this talk, I will begin by presenting our renowned FakeCatcher, which detects synthetic content in portrait videos using heartbeats, as a preventive solution for the emerging threat of deep fakes. Detectors that blindly apply deep learning are not as effective at catching fake content, as generative models keep producing formidably realistic results. My key assertion is that such signals hidden in portrait videos can be used as an implicit descriptor of authenticity, like a generalizable watermark of humans, because they are neither spatially nor temporally preserved in deep fakes. Building robust and accurate deep fake detectors by exhaustively analyzing heartbeats, PPG signals, eye vergence, and gaze movements of deep fake actors reinforces our perception of reality.
Moreover, we also introduce novel models that detect the source generator of any deep fake by exploiting its heartbeats to unveil the residuals of different generative models. Achieving leading results on both existing datasets and our recently introduced in-the-wild dataset justifies our approaches and pioneers a new dimension in deep fake research.
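The heartbeat signals referenced above are typically recovered via remote photoplethysmography (rPPG): subtle periodic color changes in facial skin track the blood volume pulse, and their dominant frequency should fall in the human heart-rate band. The sketch below is not FakeCatcher's actual pipeline, only an illustration of that core idea on a synthetic signal, with all parameters chosen for the example:

```python
import numpy as np

fps = 30.0                              # frames per second of the "video"
t = np.arange(0, 10, 1 / fps)           # 10 seconds -> 300 frames
true_pulse_hz = 1.2                     # 72 beats per minute

# Synthetic mean green-channel intensity of a facial region: a weak
# periodic blood-volume-pulse component buried in sensor noise.
rng = np.random.default_rng(0)
signal = 0.05 * np.sin(2 * np.pi * true_pulse_hz * t)
signal = signal + rng.normal(0.0, 0.02, t.size)

# Find the dominant frequency inside the plausible heart-rate band.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)  # 42-240 bpm
estimated_hz = freqs[band][np.argmax(spectrum[band])]
```

A real face yields a clear spectral peak in this band; a generated face whose model does not reproduce the pulse tends not to, which is the kind of cue such detectors exploit.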
From Campus to Codebase: The Journey from Student to Software Engineer
The Cloud on Your Computer, or Bringing the Power of Virtualization to Your Laptop/Desktop
Visiting Professor, Computer Science Department
Sonoma State University
These days, cloud services go way beyond online storage. Major players like Amazon, Microsoft, and Google offer a smorgasbord of online cloud services, ranging from simple cloud storage solutions to software as a service, hardware as a service, and even whole computing infrastructures as a service. In this talk, I will demonstrate how all of the above can be achieved on any halfway modern laptop. It will be limited in performance, but not in the scope of services provisioned. This can be achieved entirely with free software. The talk will provide a step-by-step live demonstration of creating a complete computing infrastructure consisting of different operating systems and servers.
Predictable Failures: Secrets From a Former IT Provider and Current HIPAA & Cybersecurity Consultant
CEO & Risk Mitigation Specialist
Copper Penny Consulting LLC
This talk will dive into the world of HIPAA data breaches and how IT and cyber experts can be either the best asset or the worst liability to a healthcare practice. Through case studies of real data breaches and the speaker's experience as a local IT provider, attendees will get a clear picture of what the landscape looks like as they find careers and become the experts themselves.
Robots That Care: Socially Assistive Robotics and the Future of Work and Care
South Pasadena, CA, United States
The nexus of advances in robotics, NLU, and machine learning has created opportunities for personalized robots. The current pandemic has both caused and exposed unprecedented levels of health & wellness, education, and training needs worldwide. Socially assistive robotics has the potential to address those and longer-standing care needs through personalized and affordable support that complements human care.
This talk will discuss human-robot interaction methods for socially assistive robotics that utilize multi-modal interaction data and expressive and persuasive robot behavior to monitor, coach, and motivate users to engage in health, wellness, education and training activities. Methods and results will be presented that include modeling, learning, and personalizing user motivation, engagement, and coaching of healthy children and adults, stroke patients, Alzheimer's patients, and children with autism spectrum disorders, in short and long-term (month+) deployments in schools, therapy centers, and homes. Research and commercial implications and pathways will be discussed.
Ethical Implications of Robot Inner Speech
Professor of Philosophy
Sonoma State University
In this talk we will explore work done in collaboration between SSU and The Robot Lab at the University of Palermo on the use of robot inner speech to develop better ethical reasoning in human-robot teams. Robot inner dialog differs from human inner dialog in that it is an additional process added for the user's benefit, not so much to help the machine that is doing the reasoning. Here we will explain the robot's inner dialog and how it has been used to build systems that display more conscious and trustworthy actions. Whereas machine morality seeks to give systems rudimentary moral reasoning capabilities, Artificial Phronesis aims to build machines that are more highly skilled in producing actions that are not only acceptable but wise and nuanced. Such machines need to be conscious of their situation and the subtle variables that go into skillfully navigating the moral terrain they find themselves in. We provide a way of testing for the perceived presence of aspects of artificial phronesis between the human user and the robot. We find that the robot's inner speech helps the user generate skilled practical moral reasoning during the experiment.
Director of Research
Institute for Experiential AI at Northeastern University, Silicon Valley campus
In the first part, we cover five current specific problems that motivate the need for responsible AI: (1) discrimination (e.g., facial recognition, justice, sharing economy, language models); (2) phrenology (e.g., biometric-based predictions); (3) unfair digital commerce (e.g., exposure and popularity bias); (4) stupid models (e.g., minimal adversarial AI); and (5) indiscriminate use of computing resources (e.g., large language models). These examples do have a personal bias, but they set the context for the second part, where we address four challenges: (1) too many principles (e.g., principles vs. techniques); (2) cultural differences; (3) regulation; and (4) our cognitive biases. We finish by discussing what we can do to address these challenges in the near future so that we can develop responsible AI, particularly under the umbrella of incoming regulation (the European Union's AI Act and the White House's Blueprint for an AI Bill of Rights).
How Private is Your Data Analysis?
Assistant Professor, Department of Mathematics and Computer Science
Santa Clara University
Our personal data is collected for many purposes. How can we impose safeguards on data analysis to make sure it doesn't inadvertently leak individual data? Differential privacy is a mathematical guarantee that even if data analysis reveals rich information about a population, there is little it can reveal about an individual, even if an adversarial party knows almost everything about the dataset. But how much can a more realistic adversary learn? We explore how this depends both on what the adversary already knows and on the algorithms used to achieve differential privacy in the first place.
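As one illustration of the kind of guarantee described above, the classic Laplace mechanism achieves differential privacy for counting queries by adding noise scaled to the query's sensitivity divided by the privacy budget epsilon. This is a standard textbook mechanism, not necessarily the specific algorithms analyzed in the talk, and the toy dataset and function names are invented for the example:

```python
import numpy as np

def private_count(data, predicate, epsilon, rng):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1 / epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Invented toy dataset: how many people are 40 or older?
ages = [23, 35, 41, 29, 52, 33, 60, 27]       # true answer: 3
rng = np.random.default_rng(42)
noisy = private_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```

The noise hides any one individual's contribution while keeping population-level statistics useful; how much an adversary can still infer depends on what they already know, which is the question the talk explores.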
The computer programs of Charles Babbage
Fred D. Gibson, Jr. Endowed Professor in Science
University of Nevada, Reno
The mathematician and inventor Charles Babbage drafted 26 code fragments between 1836 and 1840 for his unfinished “Analytical Engine.” The programs were embedded implicitly in tables representing execution traces. In this talk, we explore the programming architecture of Babbage’s mechanical computer, that is, its structure from the point of view of a programmer, based on those 26 coding examples preserved in the Babbage Papers Archive. I will also show the world's "first computer program". The programs illustrate how Babbage intended the Analytical Engine to work and what its capabilities would have been.
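To give a flavor of what "programs embedded implicitly in tables representing execution traces" means, here is a small illustrative sketch (in Python, obviously not Babbage's notation, and not one of his actual 26 fragments): a multiplication by repeated addition whose output is the kind of row-by-row record of operations and variable states that Babbage's tables wrote down.

```python
# Computing 3 * 4 by repeated addition, logging every step as a row of
# (step, operation, operand, accumulator) -- an execution-trace table.
def traced_multiply(a, b):
    trace = []
    acc = 0
    for step in range(1, b + 1):
        acc += a
        trace.append((step, "add", a, acc))
    return acc, trace

result, trace = traced_multiply(3, 4)
# trace rows: (1,'add',3,3) (2,'add',3,6) (3,'add',3,9) (4,'add',3,12)
```

Reading such a table backwards, one can recover the program that produced it, which is how Babbage's code fragments survive in his papers.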
Bio: Dr. Raúl Rojas is a professor of Statistics in the Dept. of Mathematics and Statistics at UNR. Previously, he was a professor of Mathematics and Computer Science at the Universities of Berlin, Vienna, and Halle-Wittenberg. His field of research is the theory and applications of AI. He has written three books about the history of computing. For his research in this area, especially the reconstruction of historical machines, he received the Tony Sale Award from the British Computer Society in 2015 and the Wolfgang von Kempelen Prize from the Austrian Computer Society in 2005.
Fall 2023 Short Presentations of Student Research and Awards
Short presentations of research carried out by Sonoma State Computer Science Students, and CS awards.
- Jacob Jaffee, "DFA Approximations of Non-Regular Languages"
- Brandon Dale, "Modeling AI Fairness by Equity"
Fall 2023 Presentations of Student Capstone Projects
Short presentations of capstone projects carried out by Sonoma State Computer Science Students.