MONDEGO: Surveillance and MONitoring baseD on low-power intEGrated visiOn devices

The main goal of this project is to build a hardware platform for the development of distributed, cooperative and collaborative vision applications, with special emphasis on outdoor surveillance and monitoring with limited infrastructure.

This platform will consist of a wireless network of smart cameras, able to locally process images, extract features and analyze the scene. Wireless communication between the nodes permits the exchange of information in order to realize distributed vision algorithms, or to transmit information towards a base station. The implementation of the smart camera is oriented towards reducing power consumption. A single chip contains the image sensor together with the processing and memory elements needed to realize low-level vision tasks in a fully parallel manner. This makes efficient use of the resources and permits the on-chip generation of simplified representations of the scene. These representations are very useful for higher-order cognitive tasks, such as object and event classification and scene interpretation.
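
To make the benefit of this on-chip data reduction concrete, the back-of-the-envelope sketch below compares the raw pixel rate of the targeted sensor (800×600 pixels at 25 frames per second, as listed in the objectives) with the rate of a compact salient-point representation. The 8-bit pixel depth and the 256-point, 8-byte-per-point descriptor budget are illustrative assumptions, not design figures of the project.

    # Back-of-the-envelope comparison of the data leaving the chip per second.
    # Assumptions (illustrative, not project figures): 8-bit grayscale pixels,
    # and a compact representation of 256 salient points of 8 bytes each.
    FRAME_W, FRAME_H, FPS = 800, 600, 25      # SVGA at 25 fps (project targets)
    BYTES_PER_PIXEL = 1                       # 8-bit grayscale (assumption)
    POINTS, BYTES_PER_POINT = 256, 8          # descriptor budget (assumption)

    raw_rate = FRAME_W * FRAME_H * BYTES_PER_PIXEL * FPS   # bytes per second
    compact_rate = POINTS * BYTES_PER_POINT * FPS          # bytes per second

    print(f"raw video      : {raw_rate * 8 / 1e6:6.1f} Mbit/s")     # ~96 Mbit/s
    print(f"salient points : {compact_rate * 8 / 1e6:6.2f} Mbit/s")  # ~0.41 Mbit/s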

Surveillance systems based on distributed cameras developed so far have required considerable infrastructure. Power must be supplied to every node. A great deal of information needs to be transmitted through the network, and a powerful processing facility is required to handle such an information flow. On the other hand, wireless sensor networks have been employed for environmental monitoring and surveillance. However, they track scalar magnitudes (pressure, temperature, humidity, chemical concentration, etc.) by taking sensor readings and transmitting them to the base station. Sensor modalities with more complex data structures or larger data volumes have typically been excluded. In the case of vision, despite it being our most valuable tool for extracting information from the environment, it has not yet been possible to incorporate vision capabilities into sensor network nodes without compromising their autonomy.

Our intention is to demonstrate that careful design, proper partitioning of the system and a holistic approach to reducing power consumption can lead to a low-power wireless smart camera that allows the deployment of a low-cost, scalable camera network in scenarios with limited infrastructure. The approach will be to seek the convergence of wireless sensor networks and distributed smart cameras in order to develop a platform for surveillance and monitoring.

Objectives

The main objective of this project is to build a hardware platform oriented towards surveillance and monitoring with limited infrastructure. It will consist of a network of autonomous sensors with local image processing and scene analysis capabilities, able to communicate wirelessly with each other.

The partial objectives are:

    The efficient implementation of concurrent sensors and processors, capable of generating a simplified representation of the scene. The processing scheme will be bio-inspired. The maximum power consumption of the smart sensor will be 250 mW.
    The implementation of smart vision sensor chips in vertical integration technologies. This will resolve the trade-off between processing speed and spatial resolution. The targeted image size will be 800×600 pixels (SVGA).
    The incorporation of CMOS-compatible light-sensing structures for 3D information extraction. We intend to start a research line on 2D-3D information interaction.
    On-chip generation of an alternative scene representation, based on salient points and characteristic features. This will be done at a rate of at least 25 fps.
    The inclusion of power management techniques at system level, combining the use of low-power operators, hardware re-use and energy scavenging.
    The design of distributed vision algorithms based on the exchange of high-level information between nodes. This will take the form of collaborative tracking of elements through the area under surveillance (see the sketch after this list). It can also be applied to event or gesture detection and interpretation using combined information from different points of view.
    The interfacing of the integrated vision system with the communications infrastructure. The information flow must be agile without compromising the power budget. In principle we will make use of a commercial platform, although we may work on the protocol and the MAC and LLC sub-layers.
    The development of a demonstrator exploiting the capabilities of wireless smart cameras. It will target a scenario in which there is a need for node autonomy, wireless communication, easy network deployment and low maintenance. Given the results obtained in the V-mote and WiVisNet projects, the demonstrator will be oriented towards surveillance in natural environments for early wildfire detection, wildlife tracking or perimeter monitoring.
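
The collaborative tracking objective above assumes that nodes exchange compact, high-level descriptions of the scene rather than imagery. The sketch below illustrates the idea under stated assumptions: a toy contrast-based saliency measure stands in for the on-chip feature extraction, and a JSON message with a hypothetical format (node identifier, timestamp, list of salient points) stands in for whatever inter-node protocol is eventually adopted. Python and NumPy are used only for illustration; in the project the low-level processing is meant to run on the sensor chip itself.

    import json
    import time

    import numpy as np

    def detect_salient_points(frame, max_points=64, block=16):
        """Toy saliency measure: rank fixed-size blocks by local contrast.

        Local contrast is approximated by the standard deviation of the
        pixel intensities in each block; the centres of the strongest
        blocks are reported as salient points.
        """
        h, w = frame.shape
        candidates = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = frame[y:y + block, x:x + block]
                candidates.append((float(patch.std()), x + block // 2, y + block // 2))
        candidates.sort(reverse=True)                    # strongest contrast first
        return [[x, y, round(score, 2)] for score, x, y in candidates[:max_points]]

    def build_track_message(node_id, frame):
        """Pack the detections of one frame into a compact message for neighbours."""
        return json.dumps({
            "node": node_id,                             # hypothetical node identifier
            "t": time.time(),                            # capture timestamp
            "detections": detect_salient_points(frame),  # [x, y, saliency] triples
        }).encode("utf-8")

    # Synthetic SVGA frame (600 rows x 800 columns, 8-bit grayscale).
    frame = np.random.randint(0, 256, size=(600, 800), dtype=np.uint8)
    message = build_track_message("cam-03", frame)
    print(f"raw frame: {frame.nbytes} bytes -> message: {len(message)} bytes")

Even with this uncompressed textual encoding, the message is on the order of a kilobyte per frame, against 480,000 bytes for the raw SVGA frame, which is what makes per-node wireless transmission of scene descriptions feasible within a tight power budget.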
