AbuSniff: An automated social network abuse detection system
Southern Illinois University
Social networks like Facebook provide functionality that can expose users to abuse perpetrated by their contacts. For instance, Facebook users can often access sensitive profile information and timeline posts of their friends, and can also post abuse on their friends' timelines and news feeds. In this talk, we introduce AbuSniff, a system that identifies Facebook friends perceived to be abusive or strangers and protects the user by restricting such friends' access to information. We develop a questionnaire to detect perceived strangers and friend abuse. We train supervised learning algorithms to predict questionnaire responses using features extracted from mutual activities with Facebook friends. In our experiments, participants recruited from a crowdsourcing site agreed with 78% of the defense actions suggested by AbuSniff, without having to answer any questions about their friends. When compared to a control app, AbuSniff significantly increased the willingness of participants to take a defensive action against friends. AbuSniff also increased participants' self-reported willingness to reject friend invitations from strangers and abusers, their awareness of friend abuse implications, and their perceived protection from friend abuse.
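The core idea of the abstract, predicting a user's questionnaire response about a friend from features of their mutual activity, can be sketched with an off-the-shelf classifier. This is an illustrative toy example, not AbuSniff's actual code; the feature names and labels are assumptions.

```python
# Hypothetical sketch of AbuSniff's prediction step: infer whether a user
# would label a Facebook friend a "stranger" from mutual-activity features.
# Feature columns (assumed): [mutual_posts, common_photos, timeline_comments]
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [0, 0, 0],   # no mutual activity -> user labeled this friend a stranger
    [1, 0, 0],
    [12, 5, 9],  # frequent mutual activity -> labeled a real friend
    [8, 3, 7],
]
y_train = ["stranger", "stranger", "friend", "friend"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Predict responses for two unseen friends, sparing the user the questionnaire.
predictions = clf.predict([[0, 1, 0], [10, 4, 8]])
print(list(predictions))
```

In the system described, such predicted responses would then drive the suggested defense action (e.g., restricting a predicted stranger's access) without the user having to answer questions about each friend.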
Professional Senior-Level Software Development
Senior Software Engineer
Hulu - The Walt Disney Company
Healthy software development teams often hold individuals with significant experience to higher expectations than entry-level or "junior" contributors. Usually, organizations mark this differentiated scope and responsibility with a title like "Senior Software Developer." When considering a career in software development, it's natural to focus on the immediate concerns of becoming a successful entry-level developer. However, an understanding of the "senior" expectations and practices that developers will encounter in the medium and long term is invaluable for new professionals looking to bring their own career plans into focus.
So, how do Senior Software Developers impact the products they build? What strategies might Senior Developers use to empower their teams to be more effective? I'll discuss patterns I've noticed in the 17 years I've worked as a software developer in the consumer electronics domain. Drawing on my experiences contributing to large-scale products and services such as Xbox, HoloLens, Sonos, and Hulu, I will share examples of impactful Senior-level deliverables. Through this survey of some of the novel ways individual contributors can make positive team-wide contributions, audiences will gain a clearer understanding of how professional software development works on large teams.
Automotive Software Architecture and Unreal Engine for HMI
Joe Andresen ('08)
Technical Product Manager - HMI
In this talk I will cover general software architecture for human machine interfaces (HMI) in cars and how Unreal Engine not only fits into this architecture but also brings together teams and organizations within car companies to build better UI/UX experiences.
Lessons from Tech Transfer at Microsoft Research
As a basic industrial research lab, Microsoft Research expects its members both to publish basic research and to put it into practice. Unfortunately, moving a validated technique or model from a published paper to the point where that technique is regularly used by, and provides value to, software development projects is a time-consuming, fraught, and difficult task. We have attempted to make this transition, which we call "Tech Transfer," many times in the Empirical Software Engineering group (ESE) at Microsoft Research. Much like research in general, there have been both triumphs and setbacks, but each experience has provided valuable insight and informed our next effort. This talk shares our experiences from successes and failures and provides lessons and guidance for others trying to transfer their ideas into practice, in both industrial and academic contexts.
How do we know if data science is “for good”?
Human Rights Data Analysis Group
We interact with the outputs of quantitative models multiple times a day. As methods from statistics, machine learning, and artificial intelligence become more ubiquitous, so too do calls to ensure that these methods are used "for good," or at the very least, ethically. But how do we know if we are achieving "good"? This question will frame a presentation of case studies from the Human Rights Data Analysis Group (HRDAG), a Bay Area nonprofit that uses data science to analyze patterns of violence. Examples will include collaborations with US-based organizations investigating police misconduct and partnerships with international truth commissions and war crimes prosecutors. HRDAG projects will be used to illustrate the challenges of real-world data, including incomplete and unrepresentative samples and adversarial political and/or legal climates. We will especially highlight the potential harm of inappropriately analyzing and interpreting incomplete and imperfect data, raising questions such as: How can we develop approaches that help us identify the cases where analytical tools can do the most good, and avoid or mitigate the most harm? We propose starting with two simple questions: What is the cost of being wrong? And who bears that cost?