INTRODUCTION

The purpose of this project is twofold: first, to develop an underwater robot able to navigate autonomously while exploring the ocean floor; and second, to build the software for constructing geo-referenced photo-mosaics of the seafloor and monitoring their evolution over time. These video-based mosaics will serve as base maps for environmental monitoring of areas of interest, allowing the detection of changes in biological communities and their environment, and enabling a new way to visualize the evolution of wide areas over time.

Present-day science missions using underwater robots benefit greatly from equipping the robot with both a still photographic camera and an acoustic camera (Hansen and Andersen 1996). The still camera is needed to map areas where very high resolution or multispectral information is required to classify biological communities, corals, etc., while the acoustic camera (called Echoscope) provides range data for three-dimensional (3D) scene reconstruction.

Consider a scenario in which we want to monitor the evolution of a given area of the ocean floor. If this area is larger than a few meters across, or if it lies at considerable depth, combining underwater vehicles with photomosaicing techniques merges the best of two worlds: on the one hand we gain a global view of the site of interest through a georeferenced photomosaic and, on the other, we can access sites at depths beyond those reachable by divers equipped with cameras. Experience shows that acquiring images with a teleoperated robot, to be processed later into a photomosaic, is a tedious and error-prone task. If full coverage of the site of interest is required, the robot trajectory must be planned so that the overlap between adjacent transects allows photomosaicing techniques to be applied.
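As a rough illustration of this planning constraint, the sketch below computes how far apart adjacent transects can be for a downward-looking camera so that neighbouring image strips still overlap enough for mosaicing. The field of view, altitude and overlap values are assumptions chosen for the example, not parameters of the project.

    import math

    def transect_spacing(altitude_m, fov_deg, side_overlap):
        # Across-track footprint of a downward-looking camera over a flat
        # seafloor, reduced by the desired side overlap between strips.
        footprint = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
        return footprint * (1.0 - side_overlap)

    # Example: a 60-degree across-track field of view at 2 m altitude images
    # a strip about 2.3 m wide; with 30% side overlap, adjacent transects
    # should be roughly 1.6 m apart.
    print(transect_spacing(2.0, 60.0, 0.3))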

The use of Autonomous Underwater Vehicles (AUVs), equipped with appropriate sensors and an on-line mosaicing strategy, ensures full coverage with no gaps during the image acquisition phase and relieves the pilot of this tedious task. Unfortunately, the AUVs currently on the market are designed to fly at "high" altitudes (beyond visual range of the seafloor), which is suitable for bathymetric surveying but not for photographic surveying. Hovering AUVs, which fly close to the seafloor at very slow speeds using obstacle-avoidance navigation techniques, are not commercially available. For this reason, we propose in this project to build a hover-type AUV able to navigate at "visual" range from the seafloor. The vehicle will navigate at low speed (about 1 meter/second) and will be highly manoeuvrable, avoiding obstacles automatically and safely. It will be deployed from a ship with the task of (optically) mapping a selected area. As the robot moves, it will estimate the coverage of the images it is acquiring and plan its trajectory accordingly, revisiting any area that has not yet been covered (Negahdaripour and Xu 2002).
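A minimal sketch of how such on-line coverage estimation might be kept is given below, assuming a flat seafloor, a square image footprint, and an arbitrarily chosen grid resolution and survey size; none of these values come from the project itself.

    import numpy as np

    CELL = 0.25                                 # grid resolution in metres (assumed)
    covered = np.zeros((400, 400), dtype=bool)  # 100 m x 100 m survey area (assumed)

    def mark_footprint(x_m, y_m, half_width_m):
        # Mark as covered the square footprint of one image centred at (x_m, y_m).
        i0 = max(int((y_m - half_width_m) / CELL), 0)
        i1 = min(int((y_m + half_width_m) / CELL) + 1, covered.shape[0])
        j0 = max(int((x_m - half_width_m) / CELL), 0)
        j1 = min(int((x_m + half_width_m) / CELL) + 1, covered.shape[1])
        covered[i0:i1, j0:j1] = True

    def uncovered_cells():
        # Grid indices of cells the vehicle should still revisit.
        return np.argwhere(~covered)

In this scheme the vehicle would call mark_footprint for every acquired frame and, before ending the mission, replan its trajectory towards the cells returned by uncovered_cells.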

Once data acquisition is finished, the vehicle will surface at a predefined location to be recovered by the ship. On board, the acquired images, along with their associated navigation data (UTM or other geographic coordinates, heading, attitude, etc.), will be downloaded to a computer that will produce a first georeferenced photomosaic of the area in a matter of minutes. This preliminary mosaic will not align the images perfectly, but it will provide a first overview of the surveyed area. Further processing will generate a seamless photomosaic using time-consuming global alignment techniques (Gracias et al. 2006).
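The rough georeferencing step can be pictured with a simple model: each image footprint is projected onto the seafloor using the navigation data logged with the frame. The sketch below assumes a flat seafloor, a downward-looking camera at the vehicle origin, and that roll and pitch can be neglected; the function name and parameters are illustrative, not part of the project software.

    import math

    def image_corners_utm(easting, northing, heading_deg, altitude_m, fov_deg):
        # UTM positions of the four image corners for a downward-looking camera,
        # obtained by rotating the camera-frame corner offsets by the vehicle
        # heading and translating them to the vehicle position.
        half = altitude_m * math.tan(math.radians(fov_deg) / 2.0)
        c = math.cos(math.radians(heading_deg))
        s = math.sin(math.radians(heading_deg))
        corners = []
        for dx, dy in ((-half, -half), (half, -half), (half, half), (-half, half)):
            corners.append((easting + c * dx - s * dy,
                            northing + s * dx + c * dy))
        return corners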

On subsequent visits to the same area, the same procedure will yield a new photomosaic, allowing scientists to monitor temporal changes in the seafloor and to study the processes that cause them (biological, ecological, geological, or human interventions, for example). This project will develop algorithms to automatically compare photomosaics of the same area acquired at different times. These algorithms will point scientists to potential differences between the photomosaics. The algorithms to be developed will take into account errors in the geographic location of the mosaics, differences in lighting (Neumann and Neumann 2006; Neumann and Cadik 2006), camera sensitivity, and the differing scale of mosaics acquired at different times.
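As a toy illustration of the comparison step, the sketch below assumes the two mosaics have already been co-registered and resampled to the same grid, and uses a crude global normalisation as a stand-in for the lighting and sensitivity compensation the project will actually develop; the threshold value is arbitrary.

    import numpy as np

    def change_mask(mosaic_a, mosaic_b, threshold=2.0):
        # Normalise each grey-level mosaic to zero mean and unit variance to
        # remove global lighting and contrast differences, then flag pixels
        # whose normalised intensities differ by more than the threshold.
        a = (mosaic_a - mosaic_a.mean()) / mosaic_a.std()
        b = (mosaic_b - mosaic_b.mean()) / mosaic_b.std()
        return np.abs(a - b) > threshold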
