989 results for camera networks


Relevance:

60.00%

Publisher:

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart, electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require close cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the sensed physical phenomena and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In video surveillance in particular, it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging therefore plays an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness or rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions can overcome occlusion problems and describe objects in their true 3D appearance. Performing these approaches in real time is a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • small form factor, to reduce node intrusiveness; • low power consumption, to reduce battery size and extend node lifetime; • low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and the flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously from a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes are able to cope with tasks such as recognising unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible; for this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) that outperforms naive power management policies.
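As a rough illustration of the PIR-triggered, energy-dependent activation policy described above (this is not the node's actual firmware; the thresholds, costs and function names are assumptions):

```python
# Minimal sketch of an energy-dependent PIR trigger policy.  All names and
# numbers are illustrative; the real node firmware is not given in the abstract.
import random

WAKE_THRESHOLD_MWH = 200.0   # assumed minimum stored energy to run the camera
FRAME_COST_MWH = 5.0         # assumed cost of one capture + transmit cycle

def pir_triggered() -> bool:
    """Stand-in for reading the PIR sensor interrupt line."""
    return random.random() < 0.1

def process_frame() -> None:
    """Stand-in for capture, local classification and (optional) transmission."""
    pass

def duty_cycle(battery_mwh: float, steps: int) -> float:
    """Wake the camera and radio only when the PIR fires and energy allows it."""
    for _ in range(steps):
        if pir_triggered() and battery_mwh >= WAKE_THRESHOLD_MWH:
            process_frame()
            battery_mwh -= FRAME_COST_MWH
        # otherwise the camera and radio stay in their low-power state
    return battery_mwh

if __name__ == "__main__":
    print("remaining energy:", duty_cycle(battery_mwh=500.0, steps=100))
```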

Relevance:

60.00%

Publisher:

Abstract:

To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem-solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, while acceptable system-wide behaviour results. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup. The analysed approaches intelligently explore the weight space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, and (iii) achieves a greater spread in the objective space.
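As a minimal illustration of the linear scalarization described above (not the authors' adaptive-weight algorithm), the sketch below collapses a two-objective reward into a scalar with a fixed weight vector and applies an ordinary Q-learning update; all names and sizes are illustrative:

```python
# Linear reward scalarization for a multi-objective agent (toy Q-table).
import numpy as np

def scalarize(reward_vector: np.ndarray, weights: np.ndarray) -> float:
    """Collapse a multi-objective reward into a scalar via a weighted sum."""
    return float(np.dot(weights, reward_vector))

def q_update(q, state, action, reward_vector, next_state, weights,
             alpha=0.1, gamma=0.95):
    """Standard Q-learning step performed on the scalarized reward."""
    target = scalarize(reward_vector, weights) + gamma * np.max(q[next_state])
    q[state, action] += alpha * (target - q[state, action])

if __name__ == "__main__":
    q = np.zeros((5, 2))                 # 5 states, 2 actions (toy sizes)
    weights = np.array([0.7, 0.3])       # trade-off between the two objectives
    q_update(q, state=0, action=1,
             reward_vector=np.array([1.0, -0.5]), next_state=2, weights=weights)
    print(q[0])
```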

Relevance:

40.00%

Publisher:

Abstract:

Wireless Multi-media Sensor Networks (WMSNs) have become increasingly popular in recent years, driven in part by the increasing commoditization of small, low-cost CMOS sensors. As such, the challenge of automatically calibrating these camera nodes has become an important research problem, especially when a large number of such devices are deployed. This paper presents a method for automatically calibrating a wireless camera node that can rotate around one axis. The method involves capturing images as the camera is rotated and computing the homographies between the images. The camera parameters, including focal length, principal point and the angle and axis of rotation, can then be recovered from two or more homographies. The homography computation algorithm is designed to deal with the limited resources of the wireless sensor and to minimize energy consumption. A modified RANdom SAmple Consensus (RANSAC) algorithm is proposed to effectively increase the efficiency and reliability of the calibration procedure.
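A rough sketch of the central computation, under stated assumptions: synthetic correspondences are generated from the homography of a purely panning camera (H = K R K⁻¹), and the homography is then re-estimated with OpenCV's standard RANSAC, which here stands in for the modified RANSAC proposed in the paper; the intrinsics are made up for illustration.

```python
# Estimate the inter-image homography of a purely rotating camera with RANSAC.
import numpy as np
import cv2

f, cx, cy = 500.0, 160.0, 120.0                 # assumed intrinsics (QVGA-like)
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])
theta = np.deg2rad(10.0)                        # pan about the vertical axis
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
H_true = K @ R @ np.linalg.inv(K)               # homography of a pure rotation

# Synthetic correspondences plus a few gross outliers (simulated mismatches).
pts1 = np.random.uniform([0, 0], [320, 240], size=(100, 2))
pts1_h = np.hstack([pts1, np.ones((100, 1))])
pts2_h = (H_true @ pts1_h.T).T
pts2 = pts2_h[:, :2] / pts2_h[:, 2:]
pts2[:5] += np.random.uniform(30, 60, size=(5, 2))

H_est, inliers = cv2.findHomography(pts1.astype(np.float32),
                                    pts2.astype(np.float32), cv2.RANSAC, 3.0)
print("estimated homography:\n", H_est / H_est[2, 2])
print("inlier ratio:", inliers.mean())
```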

Relevance:

40.00%

Publisher:

Abstract:

Person re-identification involves recognising individuals in different locations across a network of cameras and is a challenging task due to a large number of varying factors such as pose (of both subject and camera) and ambient lighting conditions. Existing databases do not adequately capture these variations, making evaluations of proposed techniques difficult. In this paper, we present a new, challenging multi-camera surveillance database designed for the task of person re-identification. This database consists of 150 unscripted sequences of subjects travelling in a building environment through up to eight camera views, appearing from various angles and in varying illumination conditions. A flexible XML-based evaluation protocol is provided to allow a highly configurable evaluation setup, enabling a variety of scenarios relating to pose and lighting conditions to be evaluated. A baseline person re-identification system consisting of colour, height and texture models is demonstrated on this database.
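As a hedged illustration of the colour component of such a baseline (the height and texture models are omitted), the sketch below builds a hue-saturation histogram per person crop and compares crops with the Bhattacharyya distance; the identifiers are hypothetical and not the database's evaluation code.

```python
# Colour-model baseline sketch: HSV histograms compared with Bhattacharyya distance.
import numpy as np
import cv2

def colour_descriptor(person_bgr: np.ndarray) -> np.ndarray:
    """Normalised hue-saturation histogram of a cropped person image."""
    hsv = cv2.cvtColor(person_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)
    return hist

def match_score(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Lower is more similar (Bhattacharyya distance between histograms)."""
    return cv2.compareHist(desc_a, desc_b, cv2.HISTCMP_BHATTACHARYYA)

if __name__ == "__main__":
    a = np.random.randint(0, 255, (128, 48, 3), dtype=np.uint8)  # stand-in crops
    b = np.random.randint(0, 255, (128, 48, 3), dtype=np.uint8)
    print("distance:", match_score(colour_descriptor(a), colour_descriptor(b)))
```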

Relevance:

40.00%

Publisher:

Abstract:

In this paper, sensing coverage by wireless camera-embedded sensor networks (WCSNs), a class of directional sensor networks, is studied. The proposed work facilitates the autonomous tuning of the orientation parameters and displacement of camera-sensor nodes in a bounded field of interest (FoI), where coverage of every point in the FoI is important. The proposed work is the first of its kind to study the problem of maximizing the coverage of randomly deployed mobile WCSNs by exploiting their mobility. We propose an uncovered region exploration algorithm (UREA-CS) that can be executed in centralized and distributed modes. Further, the work is extended to two special scenarios: 1) to suit autonomous combing operations after initial random WCSN deployment, and 2) to improve network coverage with occlusions in the FoI. Extensive simulation results show that the performance of UREA-CS is consistent, robust, and versatile in achieving maximum coverage, both in centralized and distributed modes. The centralized and distributed modes are further analyzed with respect to computational and communication overheads.
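A minimal sketch of the coverage test that algorithms of this kind rely on, assuming a 2D field of interest and cameras modelled as sectors defined by position, heading, angular field of view and sensing range; it is not the UREA-CS algorithm itself, and the parameter names are illustrative.

```python
# Is a point of the field of interest inside a camera's sensing sector?
import math

def covers(cam_xy, cam_heading_rad, half_aov_rad, sensing_range, point_xy):
    dx, dy = point_xy[0] - cam_xy[0], point_xy[1] - cam_xy[1]
    dist = math.hypot(dx, dy)
    if dist > sensing_range:
        return False
    bearing = math.atan2(dy, dx)
    diff = (bearing - cam_heading_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_aov_rad

def coverage_ratio(cameras, grid_points):
    """Fraction of sampled FoI points seen by at least one camera."""
    seen = sum(any(covers(*cam, p) for cam in cameras) for p in grid_points)
    return seen / len(grid_points)

if __name__ == "__main__":
    cams = [((0.0, 0.0), 0.0, math.radians(30), 10.0),
            ((5.0, 5.0), math.radians(180), math.radians(30), 10.0)]
    grid = [(x, y) for x in range(10) for y in range(10)]
    print("coverage:", coverage_ratio(cams, grid))
```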

Relevance:

30.00%

Publisher:

Abstract:

Manual calibration of large and dynamic networks of cameras is labour intensive and time consuming. This is a strong motivator for the development of automatic calibration methods. Automatic calibration relies on the ability to find correspondences between multiple views of the same scene. If the cameras are sparsely placed, this can be a very difficult task. This PhD project focuses on the further development of uncalibrated wide baseline matching techniques.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an overview of our demonstration of a low-bandwidth, wireless camera network where image compression is undertaken at each node. We briefly introduce the Fleck hardware platform we have developed as well as describe the image compression algorithm which runs on individual nodes. The demo will show real-time image data coming back to base as individual camera nodes are added to the network. Copyright 2007 ACM.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple yet effective network architecture which allows multiple cameras to be connected to the network and to synchronize their communication schedules. Image compression of greater than 90% is performed at each node on a local DSP coprocessor, resulting in nodes using one eighth of the energy required to stream uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second. © 2007 IEEE.
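A back-of-envelope check of these figures, assuming raw QVGA frames at 3 bytes per pixel (the abstract does not state the raw pixel format or the radio's data rate, so the numbers are purely illustrative):

```python
# Rough bandwidth arithmetic for 90%-compressed QVGA frames at 2 fps.
width, height, bytes_per_pixel = 320, 240, 3
raw_bytes = width * height * bytes_per_pixel            # bytes per raw frame
compressed_bytes = raw_bytes * (1 - 0.90)               # after >90% compression
fps = 2
throughput_kbps = compressed_bytes * fps * 8 / 1000
print(f"raw frame:        {raw_bytes / 1024:.0f} KiB")
print(f"compressed frame: {compressed_bytes / 1024:.0f} KiB")
print(f"required rate:    {throughput_kbps:.0f} kbit/s at {fps} fps")
```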

Relevance:

30.00%

Publisher:

Abstract:

CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and detect them throughout a sparse multi-camera network. This enables tasks such as determining the time taken to get from point to point, and the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment where there are hundreds if not thousands of people, such as an airport, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct and are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people and calculate operational measures such as the time taken to travel from point to point.
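A minimal sketch of the underlying idea, under assumptions of our own: each person is summarised by a soft-biometric feature vector, distinctiveness is measured as the normalised distance from the population mean, and only the most distinct people are retained for network-wide monitoring. The feature layout and threshold are illustrative.

```python
# Flag people whose soft-biometric vectors lie far from the population average.
import numpy as np

def distinctiveness(features: np.ndarray) -> np.ndarray:
    """Distance of each person's soft-biometric vector from the population mean."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-9
    return np.linalg.norm((features - mean) / std, axis=1)

if __name__ == "__main__":
    # rows: people; columns: e.g. height, clothing hue, build (toy data)
    population = np.random.randn(1000, 3)
    scores = distinctiveness(population)
    threshold = np.percentile(scores, 95)     # keep only the most distinct 5%
    print("candidates for tracking:", np.flatnonzero(scores > threshold)[:10])
```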

Relevance:

30.00%

Publisher:

Abstract:

A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs sees great potential for growth, with applications in a vast practical domain such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation develops a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of optimal camera configuration determination. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
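As an illustration of the robot-based ground-plane calibration step described above, the sketch below fits a homography from image points to the robot's broadcast global positions; a least-squares fit via OpenCV stands in for the on-node implementation, and all coordinates are made up.

```python
# Fit an image-plane-to-ground-plane homography from robot observations.
import numpy as np
import cv2

# Hypothetical data: robot positions in the global frame and the matching
# pixel locations at which one camera detected the robot.
world_xy = np.array([[0, 0], [2, 0], [2, 2], [0, 2], [1, 1], [3, 1]],
                    dtype=np.float32)
image_xy = np.array([[40, 200], [150, 205], [160, 110], [45, 105],
                     [100, 150], [230, 140]], dtype=np.float32)

H, _ = cv2.findHomography(image_xy, world_xy)   # least-squares over all pairs

def localise(pixel):
    """Map an image point onto the global ground plane."""
    p = np.array([pixel[0], pixel[1], 1.0])
    w = H @ p
    return w[:2] / w[2]

print("object at pixel (120, 160) ->", localise((120, 160)))
```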

Relevance:

30.00%

Publisher:

Abstract:

Q. Meng and M. H Lee, Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks, Connection Science, 19(1), pp 25-52, 2007.

Relevance:

30.00%

Publisher:

Abstract:

In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions utilize unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications: unlimited field of view, infinite depth of field, and/or infinite servo precision and speed. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes that are caused, for instance, by columns or furniture in a room that can occlude potential camera views. A subclass of this general problem can be formulated in terms of planar regions that are typical of building floorplans. Given a floorplan to be observed, the problem is then to efficiently compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. In experiments the performance of the resulting system is demonstrated with different real floorplans.
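A simplified stand-in for the optimisation step: the paper solves a binary optimization over a discrete problem space, whereas the sketch below greedily selects placements from a discrete candidate set to maximise the number of covered floorplan cells, purely for illustration; the visibility sets are toy data.

```python
# Greedy camera placement over precomputed visibility sets (illustrative only).
def greedy_placement(candidates, budget):
    """candidates: dict mapping a placement id to the set of floor cells it covers."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(candidates[c] - covered),
                   default=None)
        if best is None or not (candidates[best] - covered):
            break
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

if __name__ == "__main__":
    # toy visibility sets for four candidate (position, orientation) pairs
    cand = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {6, 7}, "D": {1, 6, 7, 8}}
    picks, cells = greedy_placement(cand, budget=2)
    print("chosen placements:", picks, "covering cells:", sorted(cells))
```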

Relevance:

30.00%

Publisher:

Abstract:

We present the current status of the WASP project, a pair of wide-angle photometric telescopes, individually called SuperWASP. SuperWASP-I is located on La Palma, and SuperWASP-II at Sutherland in South Africa. SW-I began operations in April 2004. SW-II is expected to be operational in early 2006. Each SuperWASP instrument consists of up to 8 individual cameras using ultra-wide-field lenses backed by high-quality, passively cooled CCDs. Each camera covers 7.8 x 7.8 square degrees of sky, for nearly 500 square degrees of total sky coverage. One of the current aims of the WASP project is the search for extra-solar planet transits, with a focus on brighter stars in the magnitude range of roughly 8 to 13. Additionally, WASP will search for optical transients, track Near-Earth Objects, and study many types of variable stars and extragalactic objects. The collaboration has developed a custom-built reduction pipeline that achieves better than 1 percent photometric precision. We discuss future goals, which include nightly on-mountain reductions that could be used to automatically drive alerts via a small robotic telescope network, and possible roles of the WASP telescopes as providers in such a network. Additional technical details of the telescopes, data reduction, and consortium members and institutions can be found on the web site at: http://www.superwasp.org/. (c) 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Relevance:

30.00%

Publisher:

Abstract:

Report of a research project of the Fachhochschule Hannover, University of Applied Sciences and Arts, Department of Information Technologies. Automatic face recognition increases security standards at public places and border checkpoints. The picture inside identification documents can differ widely from the face that is scanned under random lighting conditions and unknown poses. The paper describes an optimal combination of three key object-recognition algorithms that is able to perform in real time. The camera scan is processed by a recurrent neural network, by an Eigenfaces (PCA) method and by a least-squares matching algorithm. Several examples demonstrate the achieved robustness and high recognition rate.
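A minimal sketch of the Eigenfaces (PCA) component only (the recurrent network and least-squares matching stages are not shown); random vectors stand in for aligned, vectorised face crops.

```python
# Eigenfaces: project faces onto a PCA subspace and match by nearest neighbour.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(50, 32 * 32))          # 50 gallery faces, 32x32 pixels
probe = train[7] + 0.1 * rng.normal(size=32 * 32)

mean_face = train.mean(axis=0)
centred = train - mean_face
# Principal components via SVD of the centred gallery.
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:20]                            # keep the top 20 components

gallery_coords = centred @ eigenfaces.T
probe_coords = (probe - mean_face) @ eigenfaces.T
match = np.argmin(np.linalg.norm(gallery_coords - probe_coords, axis=1))
print("probe matched to gallery face", match)   # expected: 7
```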

Relevance:

30.00%

Publisher:

Abstract:

The thermal processing of food affects its quality and nutritional properties. In the household, monitoring the temperature inside the food is very difficult. In addition, knowledge of the optimal temperature and time parameters for the various dishes is often insufficient. Optimal control of thermal preparation depends largely on the type of food and on the external and internal temperature exposure during the cooking process. The goal of this work was the development of an automatic oven capable of recognising the type of food and calculating the temperature inside the food during baking. The data required for the temperature calculation were acquired with several sensors. To this end, an infrared thermometer, an infrared distance sensor, a camera, a temperature sensor and a lambda probe were used inside the oven. Furthermore, a load cell, a current sensor, a voltage sensor and a temperature sensor were used outside the oven. The data sets recorded during the heating-up phase enabled the training of several artificial neural networks that could classify the various foods into the corresponding categories in order to select the optimal baking programme. To estimate the thermal diffusivity of the food, which depends on its composition (carbohydrates, fat, protein, water), several artificial neural networks were trained. With the exception of the fat content, all components could be estimated sufficiently accurately by different ANNs with a maximum of 8 hidden neurons to calculate the temperature inside the food on this basis. The work shows that, with the help of various sensors for the direct or indirect measurement of the external properties of the food, together with ANNs for categorisation and estimation of the food composition, the automatic recognition and calculation of the internal temperature of a wide variety of foods is possible.
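As a hedged sketch of the estimation step (synthetic data and feature names of our own; the thesis does not publish its training code here), a small network with 8 hidden neurons maps oven sensor features to a single composition fraction:

```python
# Small neural network (8 hidden neurons) estimating one food-composition fraction.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# columns: e.g. surface temperature, weight, reflectance, humidity (toy values)
X = rng.uniform(0, 1, size=(500, 4))
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 3]   # synthetic 'water fraction'

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("validation R^2:", round(model.score(X[400:], y[400:]), 3))
```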