Title: Autonomously Learning World-Model Representations For Efficient Robot Planning

Abstract: In recent years, it has become clear that planning is an essential tool for robots to achieve complex goals. However, robots often rely heavily on humans to provide the "world models" that enable long-horizon planning. Creating such world models is expensive, since it requires human experts who understand both the domain and the robot's limitations, and these human-generated world models are often biased by human intuition and kinematic constraints. In this talk, I will present my research on autonomously learning plannable world models. The talk will cover approaches to task and motion planning, neuro-symbolic abstractions for motion planning, and how we can learn world models for task and motion planning.

Bio: Naman is a postdoctoral researcher in the Intelligent Robot Lab (IRL) with Prof. George Konidaris. He completed his PhD at Arizona State University, advised by Prof. Siddharth Srivastava. His research focuses on methods for autonomously inventing generalizable and plannable world models for robotics tasks. He has interned at the Palo Alto Research Center, Amazon Robotics, and the Toyota Research Institute. Naman has received several graduate fellowships at ASU and a Best Demo Paper Award at AAMAS 2022.