Abstract:
The main goal of this talk is to illustrate how machine learning can start to address some of the fundamental perceptual and control challenges involved in building intelligent robots. I’ll discuss how to learn dynamics models for planning and control, how to use imitation to efficiently learn deep policies directly from sensor data, and how policies can be parameterized with task-relevant structure. I’ll show how some of these ideas have been applied to a new high speed autonomous “AutoRally” platform built at Georgia Tech and an off-road racing task that requires impressive sensing, speed, and agility to complete. Along the way, I’ll show how theoretical insights from reinforcement learning, imitation learning, and online learning help us to overcome practical challenges involved in learning on real-world platforms. I will conclude by discussing ongoing work in my lab related to machine learning for robotics.
Bio:
Byron Boots is an Assistant Professor in the School of Interactive Computing at the Georgia Institute of Technology. He holds a secondary appointment in the School of Electrical and Computer Engineering at Georgia Tech and is Visiting Faculty at Nvidia Research. He received his M.S. and Ph.D. in Machine Learning from Carnegie Mellon University and was a postdoctoral scholar in Computer Science and Engineering at the University of Washington. Byron directs the Georgia Tech Robot Learning Lab, affiliated with the Center for Machine Learning and the Institute for Robotics and Intelligent Machines. His lab conducts research on theory and systems that tightly integrate perception, learning, and control. He has received several awards including Best Paper at ICML, Best Paper at AISTATS, Best Paper Finalist at ICRA, Best Systems Paper Finalist at RSS, and the NSF CAREER award.