Katie Luo
Title:
Leveraging Auxiliary Information for Self-Driving Perception across Diverse End Users
Abstract:
While machine learning has driven great advances in fields such as computer vision, the current paradigm for self-driving trains perception systems in specific environments and then deploys them to end users across a diverse range of vehicle behaviors and appearances. This change in environment makes it hard to guarantee high accuracy outside of the development laboratory. My work focuses on exploring additional channels of information to adapt to diverse, real-world scenarios in a data-efficient manner. In this talk, I will motivate why we need to consider the different ways a user can deploy self-driving, and explore the challenges associated with them. To address these challenges, I will discuss several works proposing datasets, as well as solutions for adapting to these diverse deployment settings. Finally, I will end the talk with future directions and considerations for bringing self-driving out of constrained settings and into in-the-wild settings.
Bio:
Katie Luo is a Ph.D. student at Cornell University, advised by Prof. Kilian Q. Weinberger and Prof. Bharath Hariharan. Katie's research focuses on computer vision and perception for self-driving, with the end goal of bringing self-driving to a diverse set of end users. Prior to her Ph.D., Katie was an AI Resident at Uber ATG (now part of Aurora), and she received a B.Sc. and M.S. in Electrical Engineering and Computer Science from the University of California, Berkeley. She is also fortunate to have been supported by a Cornell University Fellowship, an Nvidia Graduate Student Fellowship, and an American Association of University Women (AAUW) Dissertation Fellowship Award.
Princewill Okoroafor
Title:
Breaking the T^{2/3} Barrier for Sequential Calibration
Abstract:
A set of probabilistic forecasts is calibrated if each prediction of the forecaster closely approximates the empirical distribution of outcomes on the subset of timesteps where that prediction was made. We study the fundamental problem of online calibrated forecasting of binary sequences, which was initially studied by Foster & Vohra (1998). They derived an algorithm with O(T^{2/3}) calibration error after T time steps and showed a lower bound of Ω(T^{1/2}). These bounds remained stagnant for two decades, until Qiao & Valiant (2021) improved the lower bound to Ω(T^{0.528}) by introducing a combinatorial game called sign preservation and showing that lower bounds for this game imply lower bounds for calibration. In this paper, we give the first improvement to the O(T^{2/3}) upper bound on calibration error of Foster & Vohra. We do this by introducing a variant of Qiao & Valiant's game that we call sign preservation with reuse (SPR). We prove that the relationship between SPR and calibrated forecasting is bidirectional: not only do lower bounds for SPR translate into lower bounds for calibration, but algorithms for SPR also translate into new algorithms for calibrated forecasting. We then give an improved upper bound for the SPR game, which implies, via our equivalence, a forecasting algorithm with calibration error O(T^{2/3−ε}) for some ε > 0, improving Foster & Vohra's upper bound for the first time. Using similar ideas, we then prove a slightly stronger lower bound than that of Qiao & Valiant, namely Ω(T^{0.54389}). Our lower bound is obtained by an oblivious adversary, marking the first ω(T^{1/2}) calibration lower bound for oblivious adversaries.
Based on joint work with Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich and Robert Kleinberg.
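For context, the bounds above refer to a calibration error of roughly the following standard form (a sketch of the usual \ell_1 calibration measure for binary outcomes; the paper's exact definition may differ in minor details). If the forecaster predicts p_t \in [0,1] at each step t and then observes an outcome y_t \in \{0,1\}, the calibration error after T steps is

\mathrm{CalErr}(T) \;=\; \sum_{p} \Bigl|\, \sum_{t \le T \,:\, p_t = p} (y_t - p) \Bigr|,

where the outer sum ranges over the distinct values p the forecaster predicted and the inner sum ranges over the timesteps on which that value was used. Each term compares the prediction p with the empirical frequency of 1s on those timesteps, weighted by how often p was predicted; the results above bound how fast this quantity must grow with T.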
Bio:
Princewill Okoroafor is currently completing his final year as a Ph.D. student in the Computer Science Department at Cornell University, advised by Robert Kleinberg. Princewill is interested in theoretical aspects of machine learning. His current research centers on designing online and statistical learning algorithms that satisfy desirable fairness guarantees, such as calibration, and are robust to deviations in practice. Princewill obtained his Bachelor's degree from Harvey Mudd College, majoring in Computer Science and Mathematics. His research is currently funded by the Cornell CIS-LinkedIn Fellowship.