870 results for Smart Camera


Relevance:

100.00%

Publisher:

Abstract:

Using a properly installed and calibrated multi-camera vision system, the motion of a drone was tracked by means of triangulation and its trajectory was then controlled automatically.
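
To make the triangulation step concrete, the following Python sketch reconstructs one 3D point from two calibrated views using the standard linear (DLT) method; the projection matrices and pixel coordinates are illustrative inputs, not values from the work above.

import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (intrinsics times extrinsics).
    x1, x2 : (u, v) coordinates of the same point observed in each view.
    Returns the 3D point in world coordinates.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 in the least-squares sense via SVD; X is homogeneous.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two normalized cameras one unit apart along x, observing the point (0, 0, 5):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate_point(P1, P2, (0.0, 0.0), (-0.2, 0.0)))   # approx. [0, 0, 5]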

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose an approach based on self-interested autonomous cameras, which exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to grow the vision graph during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, which permits the addition and removal of cameras during runtime, and, on the other hand, relying only on local information, which increases the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. © 2011 IEEE.
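
As a rough illustration of the ant-colony idea, the vision graph can be kept as pheromone-like link strengths that are reinforced on successful handovers and evaporate otherwise; the sketch below is a Python rendering under that assumption (class, parameter names and values are illustrative, not taken from the paper).

from collections import defaultdict

class VisionGraph:
    """Pheromone-weighted camera neighbourhood relations, grown at runtime."""

    def __init__(self, evaporation=0.05, deposit=1.0):
        self.evaporation = evaporation   # per-step decay of link strength
        self.deposit = deposit           # reinforcement on a successful handover
        self.links = defaultdict(float)  # (cam_a, cam_b) -> pheromone level

    def evaporate(self):
        for edge in list(self.links):
            self.links[edge] *= (1.0 - self.evaporation)
            if self.links[edge] < 1e-3:  # forget stale links, e.g. removed cameras
                del self.links[edge]

    def reinforce(self, cam_a, cam_b):
        self.links[(cam_a, cam_b)] += self.deposit

    def neighbours(self, cam, top_k=3):
        """Strongest links indicate the cameras most worth contacting first."""
        candidates = [(b, w) for (a, b), w in self.links.items() if a == cam]
        return [b for b, _ in sorted(candidates, key=lambda x: -x[1])[:top_k]]

g = VisionGraph()
g.reinforce("cam1", "cam2")   # cam2 successfully picked up an object from cam1
g.evaporate()
print(g.neighbours("cam1"))   # ['cam2']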

Relevance:

100.00%

Publisher:

Abstract:

In this article we present an approach to object tracking handover in a network of smart cameras, based on self-interested autonomous agents which exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to learn the vision graph, that is, the camera neighbourhood relations, during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, enabling efficient deployment in unknown scenarios and camera network topologies, and, on the other hand, relying only on local information, which increases the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. We have evaluated our approach both in a simulation study and in a network of real distributed smart cameras.
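
Conceptually, each handover reduces to a trade decision: the camera currently responsible for an object collects bids from its neighbours and sells the tracking responsibility when someone values it more. The Python fragment below is only a schematic rendering of that idea; the utility values and interface are hypothetical.

def handover_decision(owner_utility, bids):
    """Self-interested trade: the owner keeps the object unless some
    neighbouring camera bids a higher expected utility for tracking it.

    owner_utility : the current owner's expected utility for the object
                    (e.g. visibility minus tracking/communication cost).
    bids          : dict mapping neighbour id -> that camera's bid.
    Returns the id of the camera that should track the object next,
    or None if the owner keeps it.
    """
    if not bids:
        return None
    best_cam = max(bids, key=bids.get)
    # Trade only when the buyer values the object more than the seller,
    # so every exchange is utility-improving for both parties.
    return best_cam if bids[best_cam] > owner_utility else None

# Example: camera 3 values the object most and wins the handover.
print(handover_decision(owner_utility=0.4, bids={2: 0.3, 3: 0.7}))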

Relevance:

100.00%

Publisher:

Abstract:

Smart cameras allow video data to be pre-processed on the camera instead of being sent to a remote server for analysis, and a network of smart cameras allows various vision tasks to be processed in a distributed fashion. While cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application raises several highly interesting problems. Firstly, how can conflicting goals be satisfied, such as tracking objects while keeping communication overhead low? Secondly, how can cameras in the network self-adapt in response to the behaviour of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and is therefore highly portable across operating systems. By abstracting away various computer vision and network communication problems, it lets researchers focus on implementing and testing new self-adaptation and self-organisation algorithms for cameras.
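
The kind of discrete-time loop such a simulator runs can be sketched as follows. This is not CamSim's actual (Java) API, just an illustrative Python toy in which vision is abstracted to perfect detections inside a circular field of view and utility is simply coverage.

import random

class SimObject:
    """A tracked object doing a bounded random walk on the unit square."""
    def __init__(self):
        self.x, self.y = random.random(), random.random()
    def move(self, step=0.02):
        self.x = min(1.0, max(0.0, self.x + random.uniform(-step, step)))
        self.y = min(1.0, max(0.0, self.y + random.uniform(-step, step)))

class SimCamera:
    """A camera with a circular field of view; image analysis is abstracted away."""
    def __init__(self, x, y, radius=0.3):
        self.x, self.y, self.radius = x, y, radius
        self.utility = 0
    def sees(self, obj):
        return (obj.x - self.x) ** 2 + (obj.y - self.y) ** 2 <= self.radius ** 2
    def update(self, objects):
        # Placeholder for the self-adaptation strategy under test:
        # here, utility is just the number of objects currently covered.
        self.utility += sum(1 for o in objects if self.sees(o))

def simulate(cameras, objects, steps=1000):
    for _ in range(steps):
        for o in objects:
            o.move()
        for c in cameras:
            c.update(objects)
    return sum(c.utility for c in cameras)

cams = [SimCamera(0.25, 0.25), SimCamera(0.75, 0.75)]
print(simulate(cams, [SimObject() for _ in range(5)]))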

Relevance:

100.00%

Publisher:

Abstract:

In this paper we study the self-organising behaviour of smart camera networks which use market-based handover of object tracking responsibilities to achieve an efficient allocation of objects to cameras. Specifically, we compare previously known homogeneous configurations, in which all cameras use the same marketing strategy, with heterogeneous configurations, in which each camera makes use of its own, possibly different, marketing strategy. Our first contribution is to establish that such heterogeneity of marketing strategies can lead to system-wide outcomes which are Pareto superior to those possible in homogeneous configurations. However, since the particular configuration required to achieve Pareto efficiency in a given scenario will not be known in advance, our second contribution is to show how online learning of marketing strategies at the individual camera level can lead to high-performing heterogeneous configurations from the system point of view, extending the Pareto front compared to the homogeneous case. Our third contribution is to show that in many cases the dynamic behaviour resulting from online learning leads to global outcomes which extend the Pareto front even when compared to static heterogeneous configurations. Our evaluation considers results obtained from an open-source simulation package as well as data from a network of real cameras. © 2013 IEEE.
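
The Pareto comparison underlying these results can be stated compactly: an outcome (tracking utility, communication cost) dominates another if it is at least as good in both objectives and strictly better in one. A small Python sketch, with made-up configuration names and outcome values:

def dominates(a, b):
    """a = (tracking_utility, communication_cost); higher utility and lower
    cost are both preferred. a dominates b if it is at least as good in both
    objectives and strictly better in at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(outcomes):
    """Keep only the non-dominated (configuration, objectives) pairs."""
    front = []
    for name, obj in outcomes:
        if not any(dominates(other, obj) for _, other in outcomes if other != obj):
            front.append((name, obj))
    return front

# Illustrative outcomes: here the mixed configuration dominates the all-active one.
outcomes = [("all-active", (0.85, 120.0)),
            ("all-passive", (0.60, 20.0)),
            ("mixed", (0.88, 45.0))]
print(pareto_front(outcomes))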

Relevance:

100.00%

Publisher:

Abstract:

A recent trend in smart camera networks is the ability to modify their functionality at runtime to better reflect changes in the observed scenes and in the specified monitoring tasks. In this paper we focus on different configuration methods for such networks. A configuration is given by three components: (i) a description of the camera nodes, (ii) a specification of the area of interest by means of observation points and the associated monitoring activities, and (iii) a description of the analysis tasks. We introduce centralized, distributed and proprioceptive configuration methods and compare their properties and performance. © 2012 IEEE.
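
The three configuration components map naturally onto simple data structures; the Python sketch below is a hypothetical rendering (type and field names are assumptions, not taken from the paper).

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CameraNode:
    cam_id: str
    position: Tuple[float, float, float]        # mounting position
    pan_tilt_zoom: Tuple[float, float, float]   # current PTZ setting
    processing_budget: float                    # available on-board resources

@dataclass
class ObservationPoint:
    location: Tuple[float, float, float]
    activity: str                               # e.g. "motion detection", "tracking"
    priority: int = 1

@dataclass
class AnalysisTask:
    name: str
    required_resources: float
    assigned_to: List[str] = field(default_factory=list)   # camera ids

@dataclass
class Configuration:
    cameras: List[CameraNode]                   # component (i)
    observation_points: List[ObservationPoint]  # component (ii)
    tasks: List[AnalysisTask]                   # component (iii)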

Relevance:

100.00%

Publisher:

Abstract:

We study heterogeneity among nodes in self-organizing smart camera networks, which use strategies based on social and economic knowledge to target communication activity efficiently. We compare homogeneous configurations, when cameras use the same strategy, with heterogeneous configurations, when cameras use different strategies. Our first contribution is to establish that static heterogeneity leads to new outcomes that are more efficient than those possible with homogeneity. Next, two forms of dynamic heterogeneity are investigated: nonadaptive mixed strategies and adaptive strategies, which learn online. Our second contribution is to show that mixed strategies offer Pareto efficiency consistently comparable with the most efficient static heterogeneous configurations. Since the particular configuration required for high Pareto efficiency in a scenario will not be known in advance, our third contribution is to show how decentralized online learning can lead to more efficient outcomes than the homogeneous case. In some cases, outcomes from online learning were more efficient than all other evaluated configuration types. Our fourth contribution is to show that online learning typically leads to outcomes more evenly spread over the objective space. Our results provide insight into the relationship between static, dynamic, and adaptive heterogeneity, suggesting that all have a key role in achieving efficient self-organization.
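
One simple way a camera could adapt its strategy online is a bandit-style rule over a small set of candidate strategies; the sketch below assumes an epsilon-greedy learner and illustrative strategy names, neither of which is prescribed by the paper.

import random

class StrategyLearner:
    """Epsilon-greedy selection among a camera's candidate strategies,
    valuing each by the running mean of the per-round utility it produced."""

    def __init__(self, strategies, epsilon=0.1):
        self.epsilon = epsilon
        self.values = {s: 0.0 for s in strategies}
        self.counts = {s: 0 for s in strategies}

    def choose(self):
        if random.random() < self.epsilon:            # explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # exploit

    def update(self, strategy, reward):
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (reward - self.values[strategy]) / n

learner = StrategyLearner(["broadcast", "neighbours-only", "passive"])
s = learner.choose()
learner.update(s, reward=0.7)   # e.g. tracking utility minus communication cost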

Relevance:

70.00%

Publisher:

Abstract:

In this paper we present increased adaptivity and robustness in distributed object tracking by multi-camera networks using a socio-economic mechanism for learning the vision graph. To build up the vision graph autonomously within a distributed smart-camera network, we use an ant-colony-inspired mechanism in which cameras exchange responsibility for tracking objects using Vickrey auctions. Employing the learnt vision graph allows the system to optimise its communication continuously. Since distributed smart camera networks are prone to uncertainties in individual cameras, such as failures or changes in extrinsic parameters, the vision graph should be sufficiently robust and adaptable during runtime to enable seamless tracking and optimised communication. To better reflect real smart-camera platforms and networks, we consider that communication and handover are not instantaneous, and that cameras may be added, removed or have their properties changed during runtime. Using our dynamic socio-economic approach, the network is able to continue tracking objects well despite all these uncertainties, and in some cases even with improved performance. This demonstrates the adaptivity and robustness of our approach.
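
The Vickrey (second-price, sealed-bid) auction used for the handover has a particularly small core: the highest bidder wins but pays only the second-highest bid, which makes truthful bidding a dominant strategy. A minimal sketch with illustrative bids:

def vickrey_auction(bids):
    """Sealed-bid second-price auction.

    bids : dict mapping camera id -> bid (its expected utility for the object).
    Returns (winner, price): the highest bidder wins but pays the
    second-highest bid, so bidding one's true valuation is optimal.
    """
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

print(vickrey_auction({"cam2": 0.35, "cam4": 0.62, "cam7": 0.50}))
# -> ('cam4', 0.5): cam4 takes over the object and pays cam7's bid.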

Relevance:

70.00%

Publisher:

Abstract:

Smart cameras perform on-board image analysis, adapt their algorithms to changes in their environment, and collaborate with other networked cameras to analyze the dynamic behavior of objects. A proposed computational framework adopts the concepts of self-awareness and self-expression to more efficiently manage the complex tradeoffs among performance, flexibility, resources, and reliability. The Web extra at http://youtu.be/NKe31-OKLz4 is a video demonstrating CamSim, a smart-camera simulation tool that enables users to test self-adaptive and self-organizing smart-camera techniques without deploying a physical smart-camera network.

Relevance:

60.00%

Publisher:

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require close cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the physical phenomena being sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging therefore plays an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness or rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis: more eyes see more than one, and a camera system that can observe a scene from multiple directions is able to overcome occlusion problems and can describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of the sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance.
Low-Power Video Sensor Nodes and Video Processing Algorithms: In comparison with scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA-plus-microcontroller system-on-chip, while the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented. Featuring such intelligence, these nodes are able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data.
Multimodal Surveillance: In several setups the use of wired video cameras may not be possible, so building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
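
A hedged sketch of the PIR-gated, energy-aware duty cycling described above (thresholds, names and the decision rule are illustrative; the thesis's actual controller, including the MPC policy, is more elaborate):

def should_wake_camera(pir_triggered, battery_level, solar_input,
                       min_level=0.2, full_level=0.8):
    """Energy-dependent trigger for the camera and radio.

    The PIR sensor is always on (it draws very little power); the
    power-hungry camera and wireless module wake up only when motion is
    detected and the energy budget allows it. Near a full battery with
    solar input available, the node can afford to process even without
    a trigger (e.g. periodic scene updates).
    """
    if battery_level < min_level:
        return False                      # preserve the node's lifetime
    if pir_triggered:
        return True                       # motion detected: spend energy on vision
    return battery_level > full_level and solar_input > 0.0

print(should_wake_camera(pir_triggered=True, battery_level=0.5, solar_input=0.1))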

Relevance:

60.00%

Publisher:

Abstract:

The work carried out in this thesis belongs to the area of aerial robotics and computer vision, through the integration of vision algorithms into the control of an unmanned aerial vehicle. It is intended as a contribution to the European project SHERPA (Smart collaboration between Humans and ground-aErial Robots for imProving rescuing activities in Alpine environments), coordinated by the University of Bologna with the participation of the universities of Bremen, Zurich, Twente, Leuven and Linköping, of CREATE (Consorzio di Ricerca per l'Energia e le Applicazioni Tecnologiche dell'Elettromagnetismo), of several small and medium-sized enterprises and of the Italian Alpine Club, whose goal is to build a team of heterogeneous robots able to collaborate with humans in rescuing people lost in the Alpine environment. The objective within SHERPA is to design the autopilot and integrate it into the team. In this context, problems of considerable complexity have to be handled, such as controlling the stability of the vehicle in the presence of uncertainties due to wind, detecting obstacles along the flight path, managing flight close to obstacles, and so on. Moreover, all these operations must be performed in real time. The thesis was carried out at CASY (Center for Research on Complex Automated Systems) of the University of Bologna, using a PX4FLOW Smart Camera for the experimental tests. First, an autopilot, the PIXHAWK, to which the PX4FLOW can be interfaced, was studied; then several vision algorithms based on optical flow were studied and simulated in MATLAB. Finally, the PX4FLOW Smart Camera itself was studied and used for the experimental tests. The PX4FLOW acts as an interface to the PIXHAWK, so that the vehicle can be controlled with maximum efficiency. It consists of a camera that observes the scene, a gyroscope that measures the angular velocity, and a sonar for distance measurements. It provides the translational velocity of the vehicle which, once integrated, allows the trajectory flown by the vehicle to be reconstructed.
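
The last step, recovering the trajectory from the flow measurements, can be sketched as follows: optical flow is scaled by the sonar height to obtain metric velocity (after compensating the rotation measured by the gyroscope) and then integrated over time. Signs and axis conventions depend on the sensor frame, so the Python sketch below is only indicative of the principle, not of the PX4FLOW firmware.

def flow_to_velocity(flow_x, flow_y, gyro_x, gyro_y, height, focal_px):
    """Convert optical flow (pixels/s) into metric ground velocity (m/s).

    The gyroscope terms compensate the apparent flow caused by the
    vehicle's own rotation; the sonar height sets the metric scale.
    Signs depend on the chosen body/camera frame convention.
    """
    vx = (flow_x / focal_px - gyro_y) * height
    vy = (flow_y / focal_px + gyro_x) * height
    return vx, vy

def integrate_trajectory(velocity_samples, dt):
    """Dead-reckon the 2D path by integrating the velocity estimates."""
    x, y, path = 0.0, 0.0, [(0.0, 0.0)]
    for vx, vy in velocity_samples:
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path

print(flow_to_velocity(200.0, 0.0, 0.0, 0.0, height=1.0, focal_px=400.0))  # (0.5, 0.0)
velocities = [(1.0, 0.0)] * 100                  # 1 s of constant forward motion at 100 Hz
print(integrate_trajectory(velocities, dt=0.01)[-1])   # approx. (1.0, 0.0)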

Relevance:

60.00%

Publisher:

Abstract:

To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, while acceptable system-wide behaviour emerges. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup. The analysed approaches intelligently explore the weight space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, and (iii) achieves a greater spread in the objective space.
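
Linear scalarization itself is a one-liner: the reward vector is collapsed with a weight vector, and the chosen weights decide which trade-off the learner converges to. The sketch below illustrates a naive weight sweep; the adaptive-weight algorithm proposed in the paper replaces such a grid with an informed search of the weight space (values shown are made up).

def scalarise(rewards, weights):
    """Linear scalarization: collapse a reward vector into one signal.

    rewards : e.g. (tracking_utility, -communication_cost)
    weights : non-negative weights summing to 1; because the mapping from
              weights to learned policies is non-linear, a uniform grid of
              weights does not give a uniform coverage of the Pareto front.
    """
    return sum(w * r for w, r in zip(weights, rewards))

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(w, scalarise((0.9, -0.3), (w, 1.0 - w)))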

Relevance:

60.00%

Publisher:

Abstract:

The thesis work I undertook was carried out at Datalogic, with the aim of integrating a vision system with a laser marking system. The use of this powerful tool is, however, constrained by the particular physical position occupied by the object each time; for this reason the object has so far been fixed in the desired position by means of mechanical jigs. Until now, the presence of an operator was considered absolutely necessary to check correct positioning through a simulation of the marking. To overcome this structural limitation, Datalogic decided to introduce a tool for assisting with and observing the process: a camera. The basic idea was to use modern smart cameras to locate the object to be marked and thus make the process as automatic as possible. To achieve this, it was necessary to calibrate the whole system, camera plus laser. My work therefore focused on creating an executable that helps the customer perform this operation as simply as possible. An executable was written in C# that puts the two devices in communication and calibrates the intrinsic and extrinsic parameters. The final result is that the world reference frame of the camera coincides with that of the laser marking plane. It follows that, at the end of the calibration process, if the camera detects an object whose centroid is at position (10,10), the laser, using the same coordinates, will mark exactly at the centroid of the desired object. The main difficulty encountered was the difference between the software packages used to communicate with the two devices, and the creation of a communication channel with the laser, which did not previously exist in C#.
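
At its core, the calibration maps camera image coordinates onto the laser marking plane. The sketch below shows one common way to do this with a planar homography, using Python and OpenCV purely for illustration; the actual tool was written in C# against the vendor software, and the calibration points shown are made up.

import numpy as np
import cv2

# Pixel coordinates of calibration targets detected by the smart camera ...
image_pts = np.array([[100, 120], [520, 110], [530, 400], [95, 410]], dtype=np.float32)
# ... and the known coordinates of the same targets on the laser marking plane (mm).
laser_pts = np.array([[0, 0], [100, 0], [100, 70], [0, 70]], dtype=np.float32)

# Once lens distortion has been removed (intrinsic calibration), a planar
# homography is enough to map image pixels onto the marking plane.
H, _ = cv2.findHomography(image_pts, laser_pts)

def pixel_to_laser(u, v):
    """Convert a detected centroid (pixels) to laser coordinates (mm)."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

print(pixel_to_laser(100, 120))   # approx. (0.0, 0.0)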

Relevance:

30.00%

Publisher:

Abstract:

Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results relies heavily on many uncertain factors such as the user's memory, food knowledge, and portion estimation, and is therefore often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed in both the general population and the research community. In this thesis, an automatic food intake assessment method using cameras and inertial measurement units (IMUs) on smart phones was developed to help people foster a healthy lifestyle. With this method, users use their smart phones before and after a meal to capture images or videos of the meal. The smart phone recognizes food items, calculates the volume of the food consumed and provides the results to users. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals: (1) to develop a prototype system with existing methods in order to review the literature, find its drawbacks and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient and accurate food volume estimation methods using only smart phones with cameras and IMUs. A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest neighbor classifier was implemented to classify food items. A credit-card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase accuracy and make the algorithms applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithms was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing reliance on the marker and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as reduce noise from these sensors.
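
The marker-based metric scaling at the heart of the volume estimation can be illustrated in a few lines: a reference object of known size fixes the millimetres-per-pixel scale, which then converts segmented image measurements into a rough volume. The Python sketch below is a strong simplification of the thesis's multi-view and sensor-fusion methods, and all numbers are made up.

def metric_scale(marker_length_px, marker_length_mm=85.6):
    """Millimetres per pixel, from a reference marker of known size
    (85.6 mm is the long edge of a standard credit card)."""
    return marker_length_mm / marker_length_px

def estimate_volume_ml(food_area_px, food_height_px, scale_mm_per_px):
    """Very rough volume estimate for a roughly prism-shaped food item:
    segmented top-view area times an estimated height, both converted to
    metric units with the marker-derived scale. 1 ml = 1 cm^3."""
    area_cm2 = food_area_px * (scale_mm_per_px / 10.0) ** 2
    height_cm = food_height_px * scale_mm_per_px / 10.0
    return area_cm2 * height_cm

scale = metric_scale(marker_length_px=428)        # the card spans 428 px -> 0.2 mm/px
print(estimate_volume_ml(20000, 150, scale))      # illustrative result: 24.0 ml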