P3: Object detection within buildings (IoT + Applied Computer Vision + AI)

Selim Celik


Supervised by Charith Perera; Moderated by Frank C Langbein

Service robots currently face a significant challenge in accurately and efficiently perceiving their surroundings, which is critical for autonomous tasks such as navigation, mapping, and object manipulation. The challenge arises because the robot must perceive the world through various sensors, including cameras, lidars, and radars, and must integrate these different sources of information to build a 3D map of the environment, identify objects, and make decisions. This project focuses on researching and exploring robotic capabilities for object detection and other visual perception techniques on resource-constrained hardware, with the aim of gaining a deeper understanding of cluttered indoor environments. The project aims to develop a prototype system, with an emphasis on research and exploration rather than delivering a perfect solution. It also aims to contribute to the broader community by updating and refining existing resources on platforms such as GitHub, and by creating Docker containers for easy deployment and adaptation by users with similar hardware setups. The primary goal is to investigate these techniques on resource-constrained hardware, specifically the iRobot Create 2 and the Jetson Xavier NX development kit, using a monocular camera as the primary sensing modality for visual perception.
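To give a flavour of the object-detection pipeline the project investigates: most detectors, from heavyweight models down to those that fit on a Jetson-class board, share a common post-processing step called non-maximum suppression (NMS), which discards redundant overlapping boxes. The sketch below is an illustrative, plain-Python version of greedy NMS (not code from the project itself); box coordinates, scores, and the threshold are hypothetical example values.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: visit boxes in descending score
    order and keep a box only if it overlaps no already-kept box by
    more than iou_thresh. Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

# Hypothetical detector output: two overlapping boxes on one object,
# plus one distinct box elsewhere in the frame.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the lower-scoring duplicate is suppressed
```

On resource-constrained hardware this step matters because the detector head typically emits thousands of candidate boxes per frame, and pruning them cheaply on-device keeps the downstream mapping and decision-making stages tractable.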

Initial Plan (06/02/2023) [Zip Archive]

Final Report (19/05/2023) [Zip Archive]

Publication Form