Research

Robotics is often defined as the “intelligent connection of perception to action” (M. Brady, “Artificial Intelligence and Robotics”, MIT, 1984).

Motion Planning and Control for High-Speed, Vision-Based Quadrotor Flight

Quadrotors are very agile yet mechanically simple aerial vehicles, and recent work has shown that they can execute extremely complex maneuvers. However, most of this work relies on motion-capture systems for state estimation, which prevents these machines from exploiting their full potential in the real world. Conversely, I am interested in executing agile flight with quadrotors using solely onboard sensing (namely, a single camera and an IMU) and computing.

The advantages of a motion-capture system over onboard vision are that the state estimate is always available, at high frequency, accurate to the millimeter, and with almost constant noise covariance within the tracking volume. Conversely, a state estimate from onboard vision can be intermittent; furthermore, its covariance increases quadratically with the distance from the scene and is strongly affected by the structure and texture of the scene, as well as by the motion of the robot (e.g., motion blur). Therefore, to execute a complex, aggressive maneuver using only onboard sensing, it becomes necessary to couple perception with the trajectory-generation process (i.e., active vision).
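
The quadratic growth of uncertainty with scene distance can be seen in the standard triangulation error model used in stereo and multi-view geometry; the following minimal sketch is only an illustration of that model (the function name, focal length, baseline, and pixel-noise values are assumptions, not parameters from my work):

```python
import numpy as np

def triangulation_depth_std(depth_m, focal_px, baseline_m, pixel_noise_px=0.5):
    """Rough depth uncertainty of a triangulated landmark.

    Standard triangulation model: the depth standard deviation grows
    quadratically with the distance to the scene,
        sigma_z ~ z^2 / (f * b) * sigma_px,
    where f is the focal length (pixels), b the baseline (meters), and
    sigma_px the image measurement noise (pixels). Illustrative only.
    """
    return (depth_m ** 2) / (focal_px * baseline_m) * pixel_noise_px

# The same pixel noise hurts far scenes far more than near ones.
for z in (2.0, 5.0, 10.0):
    sigma = triangulation_depth_std(z, focal_px=400.0, baseline_m=0.2)
    print(f"depth {z:4.1f} m -> sigma_z ~ {sigma:.3f} m")
```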

This leads to a number of interesting challenges and open questions, since sensing and control cannot be treated as two separate problems. On the contrary, it is necessary to create synergy between perception and action (i.e., motion planning and control) by coupling them, as sketched below. This is especially true for inherently unstable, under-actuated robotic platforms such as quadrotors.
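
One simple way to express this coupling is a trajectory cost that trades off control effort against expected perception quality. The toy sketch below is a hedged illustration of that idea only: the weights, the distance-and-speed penalty, and all names are assumptions for the example, not the actual planner described above.

```python
import numpy as np

def perception_aware_cost(positions, velocities, landmark,
                          w_effort=1.0, w_perception=5.0):
    """Toy trade-off between aggressiveness and expected estimation quality.

    Scores a candidate trajectory (N x 3 positions and velocities) by
    (i) a control-effort proxy (acceleration magnitude) and
    (ii) a perception penalty that grows quadratically with the distance to
    the observed scene and with speed (a crude proxy for motion blur).
    Illustrative assumption, not a specific published formulation.
    """
    effort = np.sum(np.linalg.norm(np.diff(velocities, axis=0), axis=1) ** 2)
    dist = np.linalg.norm(positions - landmark, axis=1)
    speed = np.linalg.norm(velocities, axis=1)
    perception = np.sum(dist ** 2 * (1.0 + speed))
    return w_effort * effort + w_perception * perception

# A planner would pick, among dynamically feasible candidate trajectories,
# the one with the lowest combined cost.
```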