Trust, Robots, and Trust Repair.
Abstract: Trust is a dynamic force, evolving over time. When a trustee excels, trust flourishes; conversely, a trustee's failures can erode it. While trust increases are relatively straightforward to manage, trust declines can have enduring consequences, souring collaborative endeavors. Though trust breakdowns are inevitable, they can be mitigated through verbal repairs such as apologies, denials, explanations, and promises. However, it remains unclear whether these strategies effectively restore trust in robots. This gap in understanding hinders the development of resilient robots capable of gracefully recovering from inevitable failures, limiting the potential of human-robot collaboration. This presentation highlights and summarizes my recent work on understanding trust violations and identifying effective trust repairs in the context of human-robot interaction.
Bio: Connor Esterwood is a PhD candidate at the University of Michigan's School of Information. His research investigates the capacity of robots to repair and restore trust in the aftermath of trust violations. He also explores how individual differences, such as personality and mind perception, affect human-robot interaction. His work has appeared in leading journals and conferences across the human-robot interaction and human-computer interaction space, including Nature Scientific Reports, Computers in Human Behavior (CHB), the International Journal of Human-Computer Interaction (IJHCI), IEEE Robotics and Automation Letters (RA-L), the ACM/IEEE Conference on Human-Robot Interaction (HRI), the ACM Conference on Human Factors in Computing Systems (CHI), and the IEEE Conference on Robot & Human Interactive Communication (RO-MAN). He will complete his PhD in fall 2024 and is seeking academic positions beginning in fall 2025.