Multi-camera intelligent space for a robust, fast and easy deployment of proactive robots in complex and dynamic environments

One of the current challenges in robotics is the integration of robots into everyday environments. Successes in autonomous mobile robotics have so far been restricted to well-defined and narrow application scenarios, where boundary conditions are known a priori. Nevertheless, personal and professional service robots are expected to perform complex tasks in dynamic, unstructured and unknown environments, without human supervision. This is difficult to achieve with stand-alone robots that use only the information provided by their own on-board sensors. To address this issue, the robotics community has proposed the construction of intelligent spaces, i.e., spaces where many sensors and intelligent devices are distributed and provide information to the robot. In this thesis, we will build an intelligent space that allows an easy, fast, and robust deployment of robots in different environments. This space will consist of a distributed network of intelligent cameras and autonomous robots. The cameras will detect situations that might require the presence of the robots, and support their movement towards these situations. With this proposal, our robots will be able to react to events that occur anywhere, and therefore to the needs of the users regardless of where those users are. As a result, our robots will appear more intelligent and useful, and will show more initiative. The intelligence needed to achieve a robust performance of the system can be distributed amongst all the agents, cameras and robots in our case. In this thesis, we will explore two alternatives: collective intelligence and centralised intelligence. Under the collective intelligence paradigm, intelligence is evenly distributed among the agents. Global intelligence arises from individual interactions without the intervention of a central agent, similarly to the self-organisation processes that occur in nature.
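The idea of cameras guiding a robot towards a detected situation without any central agent can be sketched as a search over the camera neighbourhood graph: each camera knows only its direct neighbours, and the route the robot follows is a chain of adjacent cameras. The sketch below is purely illustrative; all camera names and the neighbour table are hypothetical, and for brevity the neighbour lists are collected in one dictionary, whereas in the real system each camera would hold only its own list.

```python
from collections import deque

# Hypothetical camera adjacency, discovered at runtime by each camera.
# Names are illustrative, not taken from the thesis.
NEIGHBOURS = {
    "cam_hall":     ["cam_corridor"],
    "cam_corridor": ["cam_hall", "cam_lab", "cam_lobby"],
    "cam_lab":      ["cam_corridor"],
    "cam_lobby":    ["cam_corridor"],
}

def camera_route(start, goal):
    """Breadth-first search over the camera neighbourhood graph:
    returns the chain of cameras that would hand the robot over,
    one to the next, from its current camera to the event camera."""
    queue, parent = deque([start]), {start: None}
    while queue:
        cam = queue.popleft()
        if cam == goal:
            route = []
            while cam is not None:       # walk parents back to start
                route.append(cam)
                cam = parent[cam]
            return route[::-1]
        for nxt in NEIGHBOURS.get(cam, []):
            if nxt not in parent:        # visit each camera once
                parent[nxt] = cam
                queue.append(nxt)
    return None  # event camera unreachable from here

print(camera_route("cam_lab", "cam_lobby"))
# ['cam_lab', 'cam_corridor', 'cam_lobby']
```

No camera needs a global map for this: the route emerges from purely local neighbour relations, which is the sense in which the collective behaviour is self-organised.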
We assume that robots can operate in a priori unknown environments when their behaviour emerges from the interaction amongst an ensemble of independent agents (cameras), which any user can place at different locations of the environment. These agents, “initially identical”, will observe human and robot behaviour, learn in parallel, adapt, and specialise in the control of the robots. To this end, our cameras need to be able to detect and track robots and humans robustly under challenging conditions: changing illumination, crowded environments, etc. Moreover, they must be able to discover their neighbouring cameras, and guide the robot's navigation through routes of these cameras. Meanwhile, the robots need only follow the instructions of the cameras and negotiate obstacles in order to avoid collisions. On the other hand, under the centralised intelligence paradigm, one type of agent is assigned much more intelligence than the rest. This agent carries out most of the decision making, and its performance is of the highest importance. In this case, the role of central agent will be played by the robot, and most of the intelligence will be devoted to the tasks of self-localisation and navigation. Our robots will implement multi-sensor localisation strategies that fuse the information of complementary sources, in order to achieve a robust robot behaviour in dynamic and unstructured environments. Among other information sources, our robots will integrate the information received from the camera network. Our proposal is a generic solution that can be applied to many different service robot applications. In this thesis, we have integrated our intelligent space with a general-purpose guide robot that we developed in the past, as a specific example of application. This robot is intended to operate in different social environments, such as museums or conferences.
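The multi-sensor fusion of complementary sources can be illustrated with the simplest possible case: a one-dimensional Kalman-style update that combines a position predicted from on-board odometry with an external observation from the camera network, weighting each by its uncertainty. This is only a minimal sketch of the general principle, not the concrete localisation strategy of the thesis; the numbers below are made up for illustration.

```python
def fuse(pred, pred_var, obs, obs_var):
    """One-dimensional Kalman-style update: fuse a predicted robot
    position (e.g. from odometry) with an external observation
    (e.g. from the camera network), weighting by uncertainty."""
    k = pred_var / (pred_var + obs_var)   # gain: trust in the observation
    x = pred + k * (obs - pred)           # fused position estimate
    var = (1.0 - k) * pred_var            # fused variance (always smaller)
    return x, var

# Odometry predicts the robot at 2.0 m along a corridor (variance 0.5);
# a camera observes it at 2.4 m with much lower variance (0.1).
x, var = fuse(2.0, 0.5, 2.4, 0.1)
```

The fused estimate lands closer to the more certain source, and its variance is lower than either input's, which is the practical payoff of integrating the camera network with the robot's own sensors.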

keywords: