Grasping Regions through Learning and Computer Vision Techniques

Hebe Wrench


Supervised by Juan Hernandez Vega; Moderated by Stuart M Allen

Grasping objects is one of the most basic and common tasks that a robot (such as a manipulator arm or mobile manipulator) must perform. While it is considered a simple task for humans, it can be a challenging operation for a robot. Grasping an object normally involves perception to detect the object, planning to approach and grasp it, and control to drive the robot through the planned motion. With the increasing popularity of learning techniques, new frameworks and libraries have been proposed to automatically detect the best grasping poses for an object.
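The perceive-plan-control loop above can be sketched as follows. This is an illustrative skeleton only, not part of the project: all names (GraspPlan, perceive, plan, control) are hypothetical placeholders for what would, in a real system, be a perception model, a motion planner, and a robot controller.

```python
from dataclasses import dataclass

@dataclass
class GraspPlan:
    target: str       # detected object label (placeholder for real perception output)
    waypoints: list   # planned end-effector positions (x, y, z)

def perceive(scene: list) -> str:
    """Toy perception stage: pick the object to grasp from detected labels."""
    return scene[0]

def plan(target: str, approach_height: float = 0.2) -> GraspPlan:
    """Toy planning stage: a straight-line descent onto the object."""
    waypoints = [(0.0, 0.0, approach_height), (0.0, 0.0, 0.0)]
    return GraspPlan(target=target, waypoints=waypoints)

def control(grasp_plan: GraspPlan) -> bool:
    """Toy control stage: drive through the waypoints and report success."""
    return len(grasp_plan.waypoints) > 0

scene = ["mug", "box"]
grasp_plan = plan(perceive(scene))
print(control(grasp_plan))  # prints True
```

In a ROS-based implementation each stage would typically be its own node communicating over topics or services, but the three-stage structure is the same.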

This project aims to implement a learning- and computer-vision-based framework to determine the grasp affordances of an object. More specifically, the framework should use computer vision to detect and learn the best grasping poses for an object that may be surrounded by obstacles in cluttered settings. The expected output of the proposed framework is one or more grasping regions around the object or, alternatively, a set of grasping poses.
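As a minimal sketch of what "a set of grasping poses" might look like as output, each candidate could carry a 3-D position, an orientation quaternion, a gripper opening width, and a quality score used to rank candidates. This representation and the ranking helper are assumptions for illustration, not the project's actual output format.

```python
import math
from dataclasses import dataclass

@dataclass
class GraspPose:
    position: tuple      # (x, y, z) in metres, in the object or world frame
    orientation: tuple   # unit quaternion (x, y, z, w)
    width: float         # required gripper opening in metres
    score: float         # predicted grasp quality in [0, 1]

def best_grasps(candidates, k=3, min_score=0.5):
    """Keep candidates above a quality threshold, best first."""
    kept = [g for g in candidates if g.score >= min_score]
    return sorted(kept, key=lambda g: g.score, reverse=True)[:k]

# Hypothetical candidates, e.g. produced by a learned grasp detector.
candidates = [
    GraspPose((0.10, 0.02, 0.05), (0, 0, 0, 1), 0.04, 0.91),
    GraspPose((0.08, 0.00, 0.05),
              (0, 0, math.sin(math.pi / 4), math.cos(math.pi / 4)), 0.06, 0.42),
    GraspPose((0.12, -0.01, 0.06), (0, 0, 0, 1), 0.05, 0.77),
]
for g in best_grasps(candidates):
    print(g.score)  # prints 0.91 then 0.77; the 0.42 candidate is filtered out
```

A grasping *region* could then be represented as a cluster of such poses, for example all accepted candidates within some distance of one another on the object surface.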

The student will need to research existing work on grasp pose detection. The framework will be developed entirely on the Robot Operating System (ROS). The project is limited to simulation results.

Prerequisites:
- Experience in software development (Python or C/C++).
- Experience with Linux is desirable; if not familiar with Linux, it will be necessary to learn it while doing the project.
- Interest in learning about robotics in general.

Initial Plan (06/02/2023) [Zip Archive]

Final Report (19/05/2023) [Zip Archive]

Publication Form