842 results for wireless sensors network
Abstract:
This paper discusses the target localization problem in wireless visual sensor networks. Additive noise and measurement errors affect the accuracy of target localization when the visual nodes are equipped with low-resolution cameras. With the goal of improving localization accuracy without prior knowledge of the target, each node extracts multiple feature points from its images to represent the target at the sensor node level. A statistical method is presented to match the most correlated feature point pair, merging the position information from different sensor nodes at the base station. In addition, for the case where more than one target exists in the field of interest, a scheme for locating multiple targets is provided. Simulation results show that the proposed method performs well in improving the accuracy of locating single or multiple targets, and that it achieves a better trade-off between camera node usage and localization accuracy.
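The abstract does not detail the statistical matching step; the sketch below is one plausible reading, assuming each camera node reports (descriptor, position) pairs for its feature points, that "most correlated" means the highest Pearson correlation between descriptors, and that fusion simply averages the two position estimates. All names and choices here are illustrative, not the paper's method.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length descriptor vectors (assumed non-constant)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def fuse_most_correlated(feats_a, feats_b):
    """feats_*: lists of (descriptor, (x, y)) reported by two camera nodes.
    Pick the descriptor pair with the highest correlation and average the
    corresponding position estimates as the fused target location."""
    _, (xa, ya), (xb, yb) = max(
        ((pearson(da, db), pa, pb) for da, pa in feats_a for db, pb in feats_b),
        key=lambda t: t[0])
    return ((xa + xb) / 2, (ya + yb) / 2)
```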
Abstract:
Until a few years ago most network communications used wire as the physical medium, but thanks to the advances and maturity of wireless communications this is changing. Nowadays wireless communications offer fast, secure, efficient and reliable connections. Mobile communications are expanding rapidly, clearly driven by the use of smartphones and other mobile devices, laptops, and so on. In addition, the investment required to install and maintain the physical medium is much lower than in wired communications, not only because the air has no cost, but also because installing and maintaining cable usually carries a considerable economic cost. Beyond cost, wire is also a medium that is more vulnerable to external threats such as noise, eavesdropping and sabotage. There are two types of wireless networks: those whose infrastructure is part of the network itself, and those that lack any structure or centralization, so that the devices forming the network can connect to each other dynamically and arbitrarily while also handling the routing of all control and information messages; the latter are known as Ad-hoc networks. In this final-year project we study one of the many wireless protocols that enable mobile communications: Optimized Link State Routing (OLSR), a standard proactive routing mechanism that works in a distributed manner to establish connections among the nodes of Ad-hoc wireless networks, which have no central node and no pre-existing infrastructure. Thanks to this protocol, every device keeps its routing table correctly updated at all times through the periodic transmission of control messages, which provides full connectivity among the devices in the network and also allows access to external networks such as virtual private networks or the Internet. This protocol could be used in environments such as airports or shopping malls. For the study of the OLSR protocol we use Network Simulator 2 (NS2), a freeware discrete-event network simulator written in C++. The simulator is used mainly in educational and research environments, particularly in research on mobile Ad-hoc networks, and supports both unicast and multicast protocols. NS2 implements not only OLSR but a wide range of wired and wireless network protocols, which is very useful for simulating different network and protocol configurations. The project also presents several NS2 simulations in different scenarios and configurations, covering wired networks and Ad-hoc wireless networks, in which the OLSR protocol is studied. The work consists of four parts. First, a complete study of the OLSR protocol, covering the benefits and drawbacks it offers, the different message types it defines, and small examples of its operation. Next, a short introduction to Network Simulator 2, including its history and the companion NAM tool, which visualizes the packet exchange between the devices in our simulations in an intuitive and friendly way. The MASIMUM platform is also described; it provides software and documentation to students in an academic environment to support research into, and simulation of, Ad-hoc networks and sensors. Finally, two examples are presented: a simulation between two PCs in an Ethernet environment, and a wireless simulation among five mobile devices using the protocol under study, OLSR.
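The core mechanism described above, keeping neighbour and routing state fresh through periodic control messages, can be pictured with a small sketch. This is a toy model with assumed timer values, not the OLSR implementation shipped with NS2; real OLSR additionally selects multipoint relays and floods TC messages to build multi-hop routes.

```python
NEIGHB_HOLD_TIME = 6.0   # assumed hold time: entries expire if no HELLO is heard for this long
                         # (HELLOs would typically be sent every ~2 s in a real deployment)

class OlsrLikeNode:
    """Toy node that tracks neighbours from periodic HELLO messages."""
    def __init__(self, name):
        self.name = name
        self.neighbours = {}                      # neighbour name -> time of last HELLO

    def receive_hello(self, sender, now):
        self.neighbours[sender] = now             # refresh (or create) the entry

    def expire_stale(self, now):
        self.neighbours = {n: t for n, t in self.neighbours.items()
                           if now - t <= NEIGHB_HOLD_TIME}

    def routing_table(self):
        # one-hop routes only in this sketch: dest -> (next hop, hop count)
        return {n: (n, 1) for n in self.neighbours}

a = OlsrLikeNode("A")
a.receive_hello("B", now=0.0)
a.receive_hello("C", now=1.0)
a.expire_stale(now=6.5)                           # B has expired, C is still fresh
print(a.routing_table())                          # {'C': ('C', 1)}
```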
Abstract:
Information and Communication Technologies (ICT) are presented as the main element for achieving more efficient and sustainable management of city resources, while ensuring that citizens' needs to improve their quality of life are satisfied. A key element is the creation of new systems that acquire context information, automatically and transparently, and provide it to decision support systems. In this paper, we present a novel distributed system for obtaining, representing and providing the flow and movement of people in densely populated geographical areas. To accomplish these tasks, we propose the design of a smart sensor network based on RFID communication technologies, reliability patterns and integration techniques. Unlike other proposals, this system is a comprehensive solution that permits the acquisition of user information in a transparent and reliable way in an uncontrolled and heterogeneous environment. This knowledge will be useful in moving towards the design of smart cities in which decision support on transport strategies, business evaluation or initiatives in the tourism sector is backed by real, relevant information. Finally, a case study is presented to validate the proposal.
Abstract:
Wireless communication and information transmission has become a reality increasingly used by contemporary societies. At the professional level, the armed forces of each country have found it convenient to modernize their equipment in order to increase efficiency and safety in certain tasks. To that end, the Portuguese Army acquired a robot (ROVIM) whose function is to perform reconnaissance and surveillance actions so as to gather information safely. The goal of this dissertation is to design and build an antenna for wireless control of the ROVIM robot. The technical specifications of this antenna require two operating modes, one with a wide beamwidth and another with a narrow beamwidth. To achieve these objectives, two antennas were designed and built. The first is a conventional Yagi-Uda antenna and the second is an antenna with a new structure that allows the gain and the -3 dB beamwidth to be adjusted. The first antenna serves as the base model for the second, which introduces the innovation of controlling the radiation characteristics. This control is made possible by introducing diodes and the corresponding bias circuit into the antenna structure. Initially, the antennas were designed and simulated with the CST MWS simulation software to operate in the 2.4 GHz band. After the antennas were built, their radiation characteristics were measured in an anechoic chamber and with a network analyzer, allowing the measured results to be compared with the simulated ones.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
A specialised reconfigurable architecture targeted at wireless base-band processing is presented. It is built to cater for multiple wireless standards, consumes less power than a processor-based solution, and can be scaled to run in parallel for processing multiple channels. Test resources are embedded in the architecture and testing strategies are included. The architecture is functionally partitioned according to the common operations found in wireless standards, such as CRC error correction, convolution and interleaving. These modules are linked via Virtual Wire hardware modules and route-through switch matrices, so data can be processed in any order through the interconnect structure. Virtual Wire provides the same flexibility as normal interconnects, but the area occupied and the number of switches needed are reduced. The testing algorithm exhaustively scans all possible paths within the interconnection network and searches for faults in the processing modules. It starts by scanning the externally addressable memory space and testing the master controller. The controller then tests every switch in the route-through switch matrix by making loops from the shared memory to each of the switches; the local switch matrix is tested in the same way. Next, the local memory is scanned. Finally, pre-defined test vectors are loaded into local memory to check the processing modules. The paper compares various base-band processing solutions, describes the proposed platform and its implementation, outlines the test resources and algorithm, and concludes with the mapping of Bluetooth and GSM base-band onto the platform.
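As a rough illustration of the test order described above (memory map, master controller, route-through and local switch matrices, local memory, then the processing modules via pre-defined test vectors), the sketch below runs the stages in sequence and stops at the first failure. Every class, method and return value here is hypothetical; the platform's real test interface is not described in the abstract.

```python
class FakePlatform:
    """Hypothetical stand-in so the harness runs; real hardware hooks would replace it."""
    def scan_addressable_memory(self): return True
    def test_master_controller(self): return True
    def route_through_switches(self): return range(4)
    def local_switches(self): return range(8)
    def loop_test_switch(self, switch): return True      # loop from shared memory and back
    def scan_local_memory(self): return True
    def processing_modules(self): return ["crc", "convolution", "interleaver"]
    def run_test_vector(self, module): return True       # pre-defined vectors from local memory

def run_selftest(p):
    steps = [
        ("memory map scan",      p.scan_addressable_memory),
        ("master controller",    p.test_master_controller),
        ("route-through matrix", lambda: all(p.loop_test_switch(s) for s in p.route_through_switches())),
        ("local switch matrix",  lambda: all(p.loop_test_switch(s) for s in p.local_switches())),
        ("local memory scan",    p.scan_local_memory),
        ("processing modules",   lambda: all(p.run_test_vector(m) for m in p.processing_modules())),
    ]
    results = {}
    for name, test in steps:
        results[name] = bool(test())
        if not results[name]:
            break                                         # stop at the first failing stage
    return results

print(run_selftest(FakePlatform()))
```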
Abstract:
The design of a dual-band 2.45/5.2 GHz antenna for an access point of a Wireless Local Area Network (WLAN) is presented. The proposed antenna is formed by a Radial Line Slot Array (RLSA) operating at 2.4 GHz and a microstrip patch working at 5.2 GHz, both featuring circular polarization. The design of this antenna system is accomplished using commercially available finite-element software, Ansoft's High Frequency Structure Simulator (HFSS), together with an in-house developed iteration procedure. The performance of the designed antenna is assessed in terms of return loss (RL), radiation pattern and polarization purity in the two frequency bands.
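For the 5.2 GHz microstrip patch element, the standard transmission-line model gives a first-pass estimate of the patch dimensions. The substrate values below (an FR-4-like laminate with εr = 4.4 and h = 1.6 mm) are assumptions for illustration; the paper's actual substrate and the HFSS-driven iteration procedure are not reproduced here.

```python
import math

C = 299_792_458.0                     # speed of light in m/s

def patch_dimensions(f0, eps_r=4.4, h=1.6e-3):
    """Transmission-line-model estimate of a rectangular microstrip patch (width, length)."""
    w = C / (2 * f0) * math.sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5
    dl = 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)) / ((eps_eff - 0.258) * (w / h + 0.8))
    l = C / (2 * f0 * math.sqrt(eps_eff)) - 2 * dl
    return w, l

w, l = patch_dimensions(5.2e9)        # the 5.2 GHz band of the dual-band design
# roughly w ≈ 17.6 mm, l ≈ 13.2 mm for the assumed substrate; HFSS tuning would refine this
```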
Abstract:
A variety of current and future wired and wireless networking technologies can be transformed into seamless communication environments through the application of context-based vertical handovers. Such seamless communication environments are needed for future pervasive/ubiquitous systems. Pervasive systems are context aware and need to adapt to context changes, including network disconnections and changes in network Quality of Service (QoS). Vertical handover is one of many possible adaptation methods; it allows users to roam freely between heterogeneous networks while maintaining the continuity of their applications. This paper proposes a vertical handover mechanism suitable for multimedia applications in pervasive systems. The paper focuses on the handover decision-making process, which uses context information regarding user devices, user location, the network environment and the requested QoS.
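The decision step can be pictured as scoring each candidate network against the current context. The attributes, ranges and weights below are assumptions chosen for illustration; the paper's actual context model and decision criteria are not reproduced.

```python
# assumed context attributes and weights (illustrative only)
WEIGHTS = {"bandwidth_mbps": 0.4, "latency_ms": 0.3, "cost_per_mb": 0.2, "signal_dbm": 0.1}

def normalise(value, lo, hi, higher_is_better=True):
    v = max(lo, min(hi, value))
    s = (v - lo) / (hi - lo)
    return s if higher_is_better else 1.0 - s

def score_network(net, required_bandwidth):
    if net["bandwidth_mbps"] < required_bandwidth:
        return 0.0                                     # cannot satisfy the requested QoS
    return (WEIGHTS["bandwidth_mbps"] * normalise(net["bandwidth_mbps"], 0, 100)
            + WEIGHTS["latency_ms"]   * normalise(net["latency_ms"], 0, 300, False)
            + WEIGHTS["cost_per_mb"]  * normalise(net["cost_per_mb"], 0, 1, False)
            + WEIGHTS["signal_dbm"]   * normalise(net["signal_dbm"], -100, -40))

def choose_network(candidates, required_bandwidth):
    """Vertical handover decision: keep or switch to the highest-scoring network."""
    return max(candidates, key=lambda n: score_network(n, required_bandwidth))["name"]

wlan = {"name": "wlan", "bandwidth_mbps": 54, "latency_ms": 20, "cost_per_mb": 0.0, "signal_dbm": -55}
umts = {"name": "umts", "bandwidth_mbps": 2, "latency_ms": 150, "cost_per_mb": 0.3, "signal_dbm": -80}
print(choose_network([wlan, umts], required_bandwidth=1.0))   # -> "wlan" while its signal holds
```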
Abstract:
Infrastructureless networks are becoming more popular with the increased prevalence of wireless networking technology. A significant challenge faced by these infrastructureless networks is that of providing security. In this paper we examine the issue of authentication, a fundamental component of most security approaches, and show how it can be performed despite an absence of trusted infrastructure and limited or no existing trust relationship between network nodes. Our approach enables nodes to authenticate using a combination of contextual information, harvested from the environment, and traditional authentication factors (such as public key cryptography). Underlying our solution is a generic threshold signature scheme that enables distributed generation of digital certificates.
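The abstract does not describe the threshold signature construction itself. The sketch below only illustrates the underlying (t, n) threshold idea with Shamir secret sharing, on which many distributed certificate-generation schemes are built; it is not the paper's scheme, and a real deployment would share a signing key and combine partial signatures rather than ever reconstructing the secret at one node.

```python
import random

PRIME = 2**127 - 1                     # a Mersenne prime, large enough for a demo secret

def make_shares(secret, t, n, rng=random.SystemRandom()):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    def f(x):                          # evaluate the degree-(t-1) polynomial mod PRIME
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789      # any 3 of the 5 nodes suffice
```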
Abstract:
Identifying water wastage in the form of leaks in a city's water distribution network is becoming essential as droughts present serious threats to a few major cities. In this paper, we propose the deployment of a sensor network for monitoring water flow in any water distribution network. We cover the issues involved in designing such a dedicated sensor network, considering the types of sensors required, the sensors' functionality, data collection, and the computation serving as the leak detection mechanism. The main focus of this paper is on appropriate network segmentation, which provides the basis for a hierarchical approach to detecting pipe failures. We show a method for allocating sensors to the network in order to facilitate effective pipe monitoring. In general, the identified computational problem is hard. The paper presents a heuristic method to build an effective hierarchy of network segments.
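One way to picture the segmentation step is a greedy partition of the pipe graph into bounded-size segments, with candidate sensor locations on the pipes that cross segment boundaries. This is a simple illustrative heuristic, not the hierarchical method proposed in the paper.

```python
from collections import deque

def segment_pipe_network(adj, max_size):
    """Greedy BFS partition of a pipe graph into connected segments of at most max_size junctions."""
    unassigned = set(adj)
    segments = []
    while unassigned:
        seed = next(iter(unassigned))
        segment, queue = set(), deque([seed])
        while queue and len(segment) < max_size:
            node = queue.popleft()
            if node in unassigned:
                unassigned.discard(node)
                segment.add(node)
                queue.extend(n for n in adj[node] if n in unassigned)
        segments.append(segment)
    return segments

def boundary_pipes(adj, segments):
    """Pipes whose endpoints lie in different segments: candidate flow-sensor locations."""
    seg_of = {n: i for i, seg in enumerate(segments) for n in seg}
    return {tuple(sorted((u, v))) for u in adj for v in adj[u] if seg_of[u] != seg_of[v]}

# toy pipe graph: adjacency list of junction ids
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3, 5], 5: [4]}
segs = segment_pipe_network(adj, max_size=3)
print(boundary_pipes(adj, segs))       # pipes crossing segment borders
```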
Abstract:
The Internet of Things (IoT) can be defined as a “network of networks” composed of billions of uniquely identified physical Smart Objects (SO), organized in an Internet-like structure. Smart Objects can be items equipped with sensors, consumer devices (e.g., smartphones, tablets, or wearable devices), and enterprise assets that are connected both to the Internet and to each other. The birth of the IoT, with its communication paradigms, can be considered an enabling factor for the creation of the so-called Smart Cities. A Smart City uses Information and Communication Technologies (ICT) to enhance the quality, performance and interactivity of urban services, ranging from traffic management and pollution monitoring to government services and energy management. This thesis focuses on multi-hop data dissemination within IoT and Smart City scenarios. The proposed multi-hop techniques, mostly based on probabilistic forwarding, are used for different purposes: from improving the performance of unicast protocols for Wireless Sensor Networks (WSNs) to efficient data dissemination within Vehicular Ad-hoc NETworks (VANETs).
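Probabilistic forwarding in its simplest form can be sketched as gossip-style flooding: a node that receives a message for the first time rebroadcasts it with probability p. The topology and forwarding probability below are illustrative; the thesis's actual protocols and their WSN/VANET specialisations are more elaborate.

```python
import random

def gossip_dissemination(adj, source, p, rng=random.Random(42)):
    """Probabilistic flooding: first-time receivers rebroadcast with probability p."""
    received = {source}
    frontier = [source]
    transmissions = 0
    while frontier:
        nxt = []
        for node in frontier:
            if node == source or rng.random() < p:    # the source always transmits
                transmissions += 1
                for nb in adj[node]:
                    if nb not in received:
                        received.add(nb)
                        nxt.append(nb)
        frontier = nxt
    return len(received), transmissions

# toy topology: a 100-node ring with some random shortcut links
n = 100
adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
extra = random.Random(1)
for _ in range(50):
    a, b = extra.randrange(n), extra.randrange(n)
    if a != b:
        adj[a].add(b); adj[b].add(a)

covered, tx = gossip_dissemination(adj, source=0, p=0.7)
print(f"reached {covered}/{n} nodes with {tx} transmissions")
```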
Abstract:
Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Rather than being just another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Many kinds of protocols work over WMNs, such as IEEE 802.11a/b/g, 802.15 and 802.16. To achieve high throughput under varying conditions, these protocols have to adapt their transmission rate. Although rate adaptation is a significant component, only a few algorithms, such as Auto Rate Fallback (ARF) and Receiver Based Auto Rate (RBAR), have been published. In this paper we show that MAC, packet loss and physical layer conditions play an important role in obtaining a good channel condition. We also perform rate adaptation together with multiple packet transmission to improve throughput. By dynamically monitoring the channel, transmitting multiple packets, and adapting to changes in channel quality by adjusting packet transmission rates according to certain optimization criteria, improvements in performance can be obtained. The proposed method detects channel congestion by measuring the fluctuation of the signal through its standard deviation, and detects packet loss before channel performance diminishes. We show that the use of such techniques in WMNs can significantly improve performance. The effectiveness of the proposed method is evaluated in an experimental wireless network testbed via packet-level simulation. Our simulation results show that, regardless of the channel condition, throughput is improved.
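A minimal sketch of the idea of backing off the transmission rate when the recent signal fluctuates strongly: the rate set and SNR thresholds below are assumed 802.11g-style values for illustration, not the thresholds or optimisation criteria used in the paper, and multiple-packet transmission is not modelled.

```python
from statistics import mean, pstdev

RATES = [6, 9, 12, 18, 24, 36, 48, 54]            # Mbit/s, assumed 802.11g-style rate set
SNR_THRESHOLDS = [5, 8, 11, 15, 19, 23, 27, 30]   # assumed dB threshold to sustain each rate

def pick_rate(snr_window):
    """Highest rate whose threshold is met by the mean SNR, penalised by its
    standard deviation so that a strongly fluctuating (congested) channel
    falls back to a more robust rate."""
    effective_snr = mean(snr_window) - pstdev(snr_window)
    rate = RATES[0]
    for r, th in zip(RATES, SNR_THRESHOLDS):
        if effective_snr >= th:
            rate = r
    return rate

print(pick_rate([26, 27, 25, 26]))    # stable channel -> a high rate is sustained
print(pick_rate([30, 12, 35, 8]))     # fluctuating channel -> falls back to a robust rate
```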
Abstract:
As wireless network technologies evolve towards an All-IP framework, next-generation wireless communication devices demand better use of spectral resources through advanced silence suppression techniques. This paper presents an analysis of VoIP call data and compares the statistical results, based on observed patterns of talk spurts and silence lengths, to those achieved by a modified on-off voice model for silence suppression in wireless networks. As talk spurts and silence lengths are sensitive to varying word lengths, temporal structure and other prosodic aspects of speech, the impact of language, dialect and speaker gender on these results is also assessed.
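The classic on-off voice model underlying such analysis alternates exponentially distributed talk spurts and silence gaps; a minimal sketch is given below. The mean durations are commonly cited illustrative values, not the statistics measured in the paper, and the paper's modified model is not reproduced.

```python
import random

def on_off_voice(mean_talk=1.0, mean_silence=1.35, duration=60.0, rng=random.Random(0)):
    """Generate alternating talk-spurt and silence lengths (seconds) and the
    resulting voice activity factor over `duration` seconds of conversation."""
    t, talking, spurts, silences = 0.0, True, [], []
    while t < duration:
        length = rng.expovariate(1.0 / (mean_talk if talking else mean_silence))
        (spurts if talking else silences).append(length)
        t += length
        talking = not talking
    activity = sum(spurts) / (sum(spurts) + sum(silences))
    return spurts, silences, activity

_, _, activity = on_off_voice()
print(f"voice activity factor ≈ {activity:.2f}")   # roughly mean_talk / (mean_talk + mean_silence)
```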
Abstract:
Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and are then transformed into interpolated maps for use in decision making. With traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If this extreme data destabilises the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between the robust methods which ignore outliers and the transformation methods which treat them as part of the (transformed) process. Using a case study, based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
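The effect of a robust likelihood can be seen in how it weights residuals: within a cutoff it behaves like the Gaussian, while beyond it extreme observations are progressively downweighted. The Huber weight below is the standard form with the usual c = 1.345 tuning constant; the paper's full REML estimation and projected process kriging machinery is not reproduced here.

```python
def huber_weight(r, c=1.345):
    """Huber weight for a standardised residual r: 1 inside [-c, c], c/|r| outside,
    so an outlying sensor reading cannot dominate the covariance (variogram) fit."""
    return 1.0 if abs(r) <= c else c / abs(r)

# routine readings vs. a sporadic sensor malfunction
residuals = [0.4, -1.1, 0.9, 12.0]                      # standardised residuals; the last is extreme
print([round(huber_weight(r), 2) for r in residuals])   # [1.0, 1.0, 1.0, 0.11]
```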