Autonomous Robotic Assembly
As part of my Autodesk Research internship in Summer 2023, I helped develop a flexible, multi-robot workcell for autonomously assembling skateboard trucks, components that require precise alignment and force-controlled insertion of several interdependent parts. The system integrates six-axis manipulators with tool changers, force/torque sensors, and in-hand cameras, and a deep-learning perception stack handles part localization, reorientation, and insertion without human intervention.

In the completed workflow, some parts were retrieved from predefined kit locations, while others were grasped using this vision-guided pipeline. Once a part was localized, a manipulator would grip it and, when necessary, reorient it with in-hand vision feedback, then hand it off to a second robot for force-controlled insertion. By coupling perception outputs with real-time force/torque feedback, the workcell dynamically adjusted insertion trajectories to prevent jamming and maintain sub-millimeter alignment.

My primary responsibility was object-pose prediction from in-hand camera data. I implemented a multi-view fusion pipeline that registered sequential in-hand images against 3D CAD templates via template matching, targeting accurate 6-DoF localization of irregularly shaped components under variable lighting and minor occlusions. Although the pipeline did not reach production-grade robustness, owing to the difficulty of accurate real-time registration, it yielded valuable insight into the limitations of template matching in constrained fields of view and informed subsequent improvements to our sensor-fusion strategies.

Autodesk Research Team: Yotto Koga, Hui Li, Xiang Zhang, Yunsheng Tian, Adam Arnold, James Emerick, Nic Carey, Srinidhi Srinivas, Nick Cote, Michael Koehle, Stefanie Pender, Noa Kaplan, Annabella Macaluso, Gadiel Sznaier Camps, Özgüç Çapunaman, Gabrielle Patin, Hans Kellner, Sachin Chitta
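To give a flavor of the multi-view fusion step described above: once each in-hand view produces a 6-DoF estimate (rotation plus translation), the per-view estimates must be combined into a single pose. The sketch below is one common way to do this, eigen-averaging of unit quaternions (Markley's method) with a weighted mean of translations. It is a minimal illustration under assumed conventions (wxyz quaternions, sign-consistent inputs, per-view confidence weights), not the actual Autodesk pipeline, and the function names are my own.

```python
import numpy as np

def average_quaternions(quats, weights):
    """Markley's eigen-averaging: the average rotation is the eigenvector
    associated with the largest eigenvalue of the weighted sum of quaternion
    outer products q q^T. Assumes quaternions are sign-consistent (all in
    the same hemisphere)."""
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = np.asarray(q, dtype=float)
        q = q / np.linalg.norm(q)
        M += w * np.outer(q, q)
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    avg = eigvecs[:, -1]                  # eigenvector of largest eigenvalue
    return avg / np.linalg.norm(avg)

def fuse_pose_estimates(poses, weights=None):
    """Fuse per-view 6-DoF estimates into one pose.

    poses   -- list of (quaternion wxyz, translation xyz) pairs
    weights -- optional per-view confidences (e.g. template-match scores)
    Returns (averaged quaternion, weighted-mean translation).
    """
    if weights is None:
        weights = np.ones(len(poses))
    w = np.asarray(weights, dtype=float)
    quats = [p[0] for p in poses]
    trans = np.asarray([p[1] for p in poses], dtype=float)
    q_avg = average_quaternions(quats, w)
    t_avg = (w[:, None] * trans).sum(axis=0) / w.sum()
    return q_avg, t_avg
```

In practice the weights would come from each view's registration quality (e.g. a template-matching score), so poorly matched views contribute less to the fused pose.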