
Robot Navigation Commands via Augmented Reality Interfaces


Ioannis-Marios Stavropoulos

11/05/2023

Supervised by Juan Hernandez Vega; Moderated by George Theodorakopoulos

Advances in sensors and algorithms are allowing robots to navigate many different types of environments (e.g., underwater or in space). To do so, robots generally receive a destination (goal) and then plan collision-free paths to reach it. An important aspect of commanding a robot to conduct such navigation tasks is the modality, i.e., the mechanism that a person employs to give the robot its destination.
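For concreteness, the following is a minimal sketch of what "giving a robot a destination" can look like on the software side, assuming a ROS-based robot running the off-the-shelf move_base navigation stack; the node name, frame name and coordinates are illustrative placeholders rather than anything prescribed by the project:

    # Minimal sketch (assumes ROS 1 with the move_base navigation stack):
    # send a single 2D navigation goal and wait for the result.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def send_goal(x, y):
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()

        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'      # planner's global frame (assumed)
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0     # identity orientation

        client.send_goal(goal)        # the stack plans and follows a collision-free path
        client.wait_for_result()
        return client.get_state()

    if __name__ == '__main__':
        rospy.init_node('ar_goal_sender')
        send_goal(1.5, 0.5)           # placeholder destination in the map frame

Once such a goal is received, the navigation stack is responsible for planning and following a collision-free path; the commanding interface only needs to supply the target pose.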

The main goal of this project is to investigate the different modalities that could be used to give navigation commands to a robot via augmented reality (AR). Such modalities include gaze direction, speech and/or hand gestures, all of which are supported in modern AR devices like the Microsoft HoloLens (available for this project). This project mainly focuses on developing a simple but effective AR interface to provide navigation goals to a mobile robot (e.g., a service robot like the Pepper robot, https://www.softbankrobotics.com/emea/en/pepper), but it will require using off-the-shelf robot navigation functionalities (clear information and working examples will be provided).
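As a hypothetical illustration of how the AR side could hand a destination to such off-the-shelf navigation functionality, the sketch below forwards a point selected in the AR interface (e.g., the 3D point hit by the user's gaze ray after an air-tap) to the robot over a rosbridge connection using the roslibpy client. The host, port, topic and frame names are assumptions for illustration; an equivalent bridge could equally be written in C# inside the Unity application.

    # Hypothetical sketch: forward a destination picked in the AR interface
    # to the robot's navigation stack via rosbridge (assumes a
    # rosbridge_server is running on the robot side).
    import roslibpy

    def forward_ar_goal(x, y, host='robot.local', port=9090):  # placeholder host/port
        ros = roslibpy.Ros(host=host, port=port)
        ros.run()  # connect to the rosbridge websocket

        goal_topic = roslibpy.Topic(ros, '/move_base_simple/goal',
                                    'geometry_msgs/PoseStamped')
        goal_topic.publish(roslibpy.Message({
            'header': {'frame_id': 'map'},        # assumed planning frame
            'pose': {
                'position': {'x': x, 'y': y, 'z': 0.0},
                'orientation': {'x': 0.0, 'y': 0.0, 'z': 0.0, 'w': 1.0},
            },
        }))

        goal_topic.unadvertise()
        ros.terminate()

    if __name__ == '__main__':
        # e.g., a point selected by gaze + air-tap in the HoloLens app,
        # already transformed into the robot's map frame
        forward_ar_goal(2.0, -1.0)

The key design question the project explores is how that point is produced in the first place (gaze, speech, gesture, or a combination), rather than how the robot executes the motion.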

Prerequisites:

- Experience in software development (Python or C#).
- Experience with Linux is desirable; if not already familiar with Linux, some working knowledge will need to be acquired during the project.
- Interest in learning about social robots, and robotics in general.
- Desirable: experience with augmented/virtual reality development frameworks (particularly Unity).

References:

Hernández, J. D. et al. 2020. Increasing robot autonomy via motion planning and an augmented reality interface. IEEE Robotics and Automation Letters 5(2), pp. 1017-1023. https://orca.cardiff.ac.uk/138124/


Initial Plan (06/02/2023)

Final Report (11/05/2023)