Colloquium Archive

Predictable Failures: Secrets From a Former IT Provider and Current HIPAA & Cybersecurity Consultant

Amy Wood
CEO & Risk Mitigation Specialist
Copper Penny Consulting LLC

10/10/2023

This talk will dive into the world of HIPAA data breaches and how IT and cybersecurity experts can be either a healthcare practice's best asset or its worst liability. Using case studies of real data breaches and the speaker's experience as a local IT provider, attendees will get a clear picture of what the landscape looks like as they pursue careers and become the experts themselves.

Robots That Care: Socially Assistive Robotics and the Future of Work and Care

Maja Mataric
South Pasadena, CA, United States

10/17/2023

The nexus of advances in robotics, NLU, and machine learning has created opportunities for personalized robots. The current pandemic has both caused and exposed unprecedented levels of health and wellness, education, and training needs worldwide. Socially assistive robotics has the potential to address those and longer-standing care needs through personalized and affordable support that complements human care.
 
This talk will discuss human-robot interaction methods for socially assistive robotics that utilize multi-modal interaction data and expressive and persuasive robot behavior to monitor, coach, and motivate users to engage in health, wellness, education, and training activities. Methods and results will be presented that include modeling, learning, and personalizing user motivation, engagement, and coaching of healthy children and adults, stroke patients, Alzheimer's patients, and children with autism spectrum disorders, in short- and long-term (month+) deployments in schools, therapy centers, and homes. Research and commercial implications and pathways will be discussed.

Ethical Implications of Robot Inner Speech

John Sullins
Professor of Philosophy
Sonoma State University

10/24/2023

In this talk we will explore work done in collaboration between SSU and the Robot Lab at the University of Palermo on the use of robot inner speech to develop better ethical reasoning in human-robot teams. Robot inner dialog differs from human inner dialog in that it is an additional process added for the user's benefit, not so much to help the machine that is doing the reasoning. Here we will explain the robot's inner dialog and how it has been used to build systems that display more conscious and trustworthy actions. Whereas machine morality seeks to give systems rudimentary moral reasoning capabilities, Artificial Phronesis aims to build machines that are more highly skilled in producing actions that are not only acceptable but wise and nuanced. Such machines need to be conscious of their situation and the subtle variables that go into skillfully navigating the moral terrain they find themselves in. We provide a way of testing for the perceived presence of aspects of artificial phronesis between the human user and the robot. We find that the robot's inner speech helps the user generate skilled practical moral reasoning during the experiment.

Responsible AI

Ricardo Baeza-Yates
Director of Research
Institute for Experiential AI at Northeastern University, Silicon Valley campus

10/31/2023

In the first part we cover five current specific problems that motivate the need for responsible AI: (1) discrimination (e.g., facial recognition, justice, sharing economy, language models); (2) phrenology (e.g., biometric-based predictions); (3) unfair digital commerce (e.g., exposure and popularity bias); (4) stupid models (e.g., minimal adversarial AI); and (5) indiscriminate use of computing resources (e.g., large language models). These examples do have a personal bias but set the context for the second part, where we address four challenges: (1) too many principles (e.g., principles vs. techniques); (2) cultural differences; (3) regulation; and (4) our cognitive biases. We finish by discussing what we can do to address these challenges in the near future to develop responsible AI, particularly under the umbrella of upcoming regulation (the European Union's AI Act and the White House's Blueprint for an AI Bill of Rights).

How Private is Your Data Analysis?

Sara Krehbiel
Assistant Professor, Department of Mathematics and Computer Science
Santa Clara University

11/07/2023

Our personal data is collected for many purposes. How can we impose safeguards on data analysis to make sure it doesn't inadvertently leak individual data? Differential privacy is a mathematical guarantee that even if data analysis reveals rich information about a population, there is little it can reveal about an individual, even if an adversarial party knows almost everything about the dataset. But how much can a more realistic adversary learn? We explore how this depends both on what the adversary already knows and on the algorithms used to achieve differential privacy in the first place.
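The guarantee described above is commonly achieved by adding calibrated noise to query results. As a minimal illustration (not drawn from the talk itself, and with all names hypothetical), the sketch below implements the standard Laplace mechanism for a counting query: since adding or removing one person changes a count by at most 1, Laplace noise with scale 1/ε gives ε-differential privacy.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution
    via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(data, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (one person changes the
    count by at most 1), so noise scale 1/epsilon suffices.
    """
    true_count = sum(1 for x in data if predicate(x))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: count records with age >= 40, with privacy budget epsilon = 0.5.
ages = [20, 35, 50, 65]
noisy = private_count(ages, lambda age: age >= 40, epsilon=0.5)
```

Smaller ε means more noise and stronger privacy; the talk's question of what a realistic adversary can still learn depends on exactly this noise calibration and on the adversary's prior knowledge.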
