Title: Accelerating the Data Flywheel for Contact-rich Manipulation via Model-based Reasoning

Abstract: The success of behavior cloning (BC) in robotic manipulation has largely been limited to “gripper-only” tasks, where training data can be reliably generated via end-effector teleoperation. In contrast, humans routinely leverage their entire hands and even body surfaces to perform contact-rich interactions, tasks that remain challenging for robots due to teleoperation difficulties and embodiment gaps. This talk introduces model-based planning as an effective data source for training contact-rich robotic policies. First, we explore the structure of complementarity-constrained optimization problems, which are ubiquitous in rigid-body dynamics. By exploiting this structure, we can generate dexterous in-hand manipulation policies in minutes on a standard laptop, using only the CPU. However, not all planning methods produce equally effective training data for BC. In the second part of the talk, we show that popular sampling-based planners can yield high-entropy demonstrations (similar states paired with widely varying actions) that degrade policy performance. To address this limitation, we propose building consistent, global planners that explicitly reason about optimality. We conclude by discussing how these insights pave the way for robust, contact-rich robotic behaviors, bridging the gap between gripper-only tasks and human-level dexterity.
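For context, here is a minimal sketch of the complementarity structure the abstract refers to, using standard notation for rigid-body contact (the symbols below are conventional, not necessarily the talk's): at each contact, the normal force magnitude $\lambda$ and the signed distance (gap) $\phi(q)$ at configuration $q$ must satisfy

\[
0 \le \lambda \perp \phi(q) \ge 0
\quad\Longleftrightarrow\quad
\lambda \ge 0, \qquad \phi(q) \ge 0, \qquad \lambda\,\phi(q) = 0,
\]

i.e., a contact force can act only while the bodies are touching. Embedding such nonsmooth constraints in trajectory optimization is what makes contact-rich planning difficult, and it is this structure that fast, structure-exploiting solvers take advantage of.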

Bio: Tao Pang is a research scientist at the Robotics and AI Institute (formerly Boston Dynamics AI Institute). His research interests lie at the intersection of contact-rich planning and robot learning, with a focus on building robots with human-level dexterity. He received his PhD from the Massachusetts Institute of Technology, where his work on global planning for contact-rich manipulation earned an Honorable Mention for the IEEE T-RO King-Sun Fu Memorial Best Paper Award.