The goal of this project is to enable the guidance of a mobile robot through a visual interface. The robot will be equipped with two cameras that acquire images of its environment, which will be displayed on a control screen. To guide the robot, the operator needs only to indicate, on the observed image, the place where the robot should go. All navigation aspects, such as velocity and steering control, obstacle avoidance, and trajectory planning and monitoring, will be left to the robot until the desired location is reached or a new goal is indicated by the operator. The navigation algorithm will be based on a multiagent approach, and coordination among the different agents will be achieved by means of a bidding mechanism. The work will be carried out on real mobile robots, in environments of increasing complexity, of which no a priori knowledge is assumed. Wheeled robots will be used in office-like and smooth outdoor environments, and a legged robot will be used in more difficult terrain. In all cases, the navigation task will be carried out autonomously, based only on the visual information provided by the cameras. One of the difficulties of this project is to give an autonomous robot some sense of orientation, so that it does not get lost when the goal location is temporarily out of sight.
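The bidding-based coordination mentioned above could be sketched as follows. This is a minimal illustration, not the project's actual design: the agent names, actions, and bid values are hypothetical, and the winner-takes-all rule is only one plausible way to resolve competing bids.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    agent: str     # name of the bidding agent (hypothetical examples below)
    action: str    # motion command the agent proposes
    value: float   # how strongly the agent wants its action, in [0, 1]

def coordinate(bids):
    """Winner-takes-all coordination: the highest bid decides the action."""
    return max(bids, key=lambda b: b.value)

# Illustrative agents: an obstacle-avoidance agent bids high when an
# obstacle is close; a goal-seeking agent bids moderately to keep moving
# toward the target indicated by the operator.
bids = [
    Bid("avoid_obstacle", "turn_left", 0.9),
    Bid("reach_goal", "go_forward", 0.6),
]
winner = coordinate(bids)
print(winner.agent, winner.action)  # avoid_obstacle turn_left
```

In such a scheme, each agent encapsulates one competence (avoiding obstacles, tracking the goal, monitoring the trajectory) and the bidding values encode urgency, so safety-critical agents can override goal-directed behavior without a central planner.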