Title: Accelerating the Data Flywheel for Contact-rich Manipulation via Model-based Reasoning
Abstract: The success of behavior cloning (BC) in robotic manipulation has largely been limited to “gripper-only” tasks, where training data can be reliably generated via end-effector teleoperation. In contrast, humans routinely leverage their entire hands and even body surfaces to perform contact-rich interactions—tasks that remain challenging for robots due to teleoperation difficulties and embodiment gaps. This talk introduces model-based planning as an effective data source for creating contact-rich robotic policies. First, we explore the structure of complementarity-constrained optimization, which is ubiquitous in rigid-body dynamics. By exploiting this structure, we can generate dexterous in-hand manipulation policies in minutes on a standard laptop using just the CPU. Notably, however, not all planning methods produce equally effective training data for BC. In the second part of the talk, we show that popular sampling-based planners can yield high-entropy demonstrations that adversely affect policy performance. To address this limitation, we propose building consistent, global planners by explicitly reasoning about optimality. We conclude by discussing how these insights pave the way for robust, contact-rich robotic behaviors, bridging the gap between purely gripper-centric tasks and human-level dexterity.
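Background note (not part of the speaker's abstract): the complementarity structure referenced above commonly takes the form of the contact condition $0 \le \phi(q) \perp \lambda \ge 0$, where $\phi(q)$ is the signed distance between two bodies at configuration $q$ and $\lambda$ is the normal contact force. The condition requires $\phi(q) \ge 0$, $\lambda \ge 0$, and $\phi(q)\,\lambda = 0$, i.e., contact forces can act only when the bodies are actually in contact; the specific formulation used in the talk may differ.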
Bio: Tao Pang is a research scientist at the Robotics and AI Institute (formerly the Boston Dynamics AI Institute). His research interests lie at the intersection of contact-rich planning and robot learning, with a focus on building robots with human-level dexterity. He received his PhD from the Massachusetts Institute of Technology, where his work on global planning for contact-rich manipulation earned an Honorable Mention for the IEEE T-RO King-Sun Fu Memorial Best Paper Award.