970 results for Radio-Base Station


Relevance:

80.00%

Publisher:

Abstract:

The growing use of telecommunication services, especially wireless ones, has demanded the adoption of new network standards that offer high transmission rates and reach a larger number of users. In this context, the IEEE 802.16 standard, on which WiMAX is based, emerges as a potential technology for providing broadband in the next generation of wireless networks, mainly because it natively offers Quality of Service (QoS) for voice, data and video flows. Video-based applications have grown considerably in recent years; in 2011, this type of content was expected to exceed 50% of all traffic from mobile devices. Video applications have a strong appeal to the end user, who is in fact the one who should evaluate the level of quality received. Therefore, new performance-evaluation methods are needed that take user perception into account, complementing traditional techniques based only on network-level aspects (QoS). From this need arose performance evaluation based on Quality of Experience (QoE), in which the end user's assessment of the application is the main parameter measured. The results of QoE investigations can be used as an extension of traditional QoS methods, while at the same time providing information about the delivery of multimedia services from the user's point of view. Examples of control mechanisms that could be included in QoE-aware networks are new routing approaches, base-station selection processes and traffic conditioning. The two evaluation methodologies are complementary and, used in combination, can produce a more robust assessment; however, the large amount of information makes this combination difficult.
In this context, the main objective of this dissertation is to create a methodology for predicting video quality in WiMAX networks through the combined use of simulations and Computational Intelligence (CI) techniques. From QoS and QoE parameters obtained through simulation, the future behaviour of the video is predicted using Artificial Neural Networks (ANN). While simulation offers a wide range of options, such as extrapolating scenarios to mimic real-world situations, CI techniques speed up the analysis of the results, enabling predictions of future behaviour, correlations and more. In this work, ANNs were chosen because they are the technique most widely used for the kind of behaviour prediction proposed in this dissertation.
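The prediction step described above can be illustrated with a deliberately minimal sketch: a single linear neuron trained by gradient descent to map QoS measurements onto a mean opinion score (MOS). The feature choice (packet loss, jitter) and all training data are invented for the example; the dissertation itself uses full multi-layer ANNs trained on simulation output.

```python
# Minimal sketch (not the dissertation's actual model): one linear neuron
# trained by stochastic gradient descent to predict MOS (1-5) from QoS inputs.

def train(samples, lr=0.0005, epochs=20000):
    """samples: list of ((packet_loss_pct, jitter_ms), mos) pairs."""
    w = [0.0, 0.0]
    b = 3.0  # start from a mid-scale MOS
    for _ in range(epochs):
        for (loss, jitter), mos in samples:
            pred = w[0] * loss + w[1] * jitter + b
            err = pred - mos
            w[0] -= lr * err * loss
            w[1] -= lr * err * jitter
            b -= lr * err
    return w, b

def predict(model, loss, jitter):
    w, b = model
    mos = w[0] * loss + w[1] * jitter + b
    return max(1.0, min(5.0, mos))  # clamp to the MOS scale

# Toy data: perceived quality degrades as loss and jitter grow.
data = [((0.0, 2.0), 4.8), ((1.0, 5.0), 4.0),
        ((3.0, 10.0), 3.0), ((8.0, 20.0), 1.5)]
model = train(data)
```

A trained model of this form can then be queried with QoS values observed in a new simulation run to anticipate the perceived video quality.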

Relevance:

80.00%

Publisher:

Abstract:

Advances in wireless communication and microelectronics enable the development of micro-sensor devices capable of monitoring large regions. Formed by thousands of sensor nodes working collaboratively, Wireless Sensor Networks face severe energy restrictions due to the limited battery capacity of the nodes that make up the network. Energy consumption can be minimized by allowing only a few special nodes, called Cluster Heads, to receive the data from the nodes in their cluster and forward them to a collection point called the Base Station. Choosing the ideal Cluster Head extends the network's stability period, maximizing its useful lifetime. The proposal presented in this dissertation uses Fuzzy Logic and the k-means algorithm, based on information centralized at the Base Station, to elect the ideal Cluster Head in heterogeneous Wireless Sensor Networks. The criteria used for Cluster Head selection are node centrality, energy level and proximity to the Base Station. This dissertation presents the disadvantages of using local information for cluster-leader election and the importance of treating the energy discrepancies among the network's nodes in a discriminating way. The proposal is compared with the Low Energy Adaptive Clustering Hierarchy (LEACH) and Distributed energy-efficient clustering algorithm for heterogeneous wireless sensor networks (DEEC), using both the end of the stability period and the useful lifetime of the network as metrics.
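A toy sketch of the election criteria follows. The dissertation combines a full fuzzy inference system with k-means; here the fuzzy membership functions are collapsed into a single weighted score over the three stated criteria (energy, centrality, proximity to the Base Station), and all node data and weights are invented.

```python
import math

# Hypothetical sketch: elect a cluster head from one cluster by a weighted
# score over residual energy, centrality (distance to cluster centroid) and
# proximity to the base station. Each node is (x, y, residual_energy_J).

def elect_cluster_head(cluster, base_station,
                       w_energy=0.5, w_central=0.3, w_prox=0.2):
    cx = sum(n[0] for n in cluster) / len(cluster)
    cy = sum(n[1] for n in cluster) / len(cluster)
    max_e = max(n[2] for n in cluster)
    best, best_score = None, -1.0
    for x, y, e in cluster:
        d_center = math.hypot(x - cx, y - cy)
        d_bs = math.hypot(x - base_station[0], y - base_station[1])
        # each criterion mapped to (0, 1]; higher is better
        score = (w_energy * (e / max_e)
                 + w_central * (1.0 / (1.0 + d_center))
                 + w_prox * (1.0 / (1.0 + d_bs)))
        if score > best_score:
            best, best_score = (x, y, e), score
    return best

nodes = [(10, 10, 2.0), (12, 11, 0.5), (50, 48, 1.9)]
head = elect_cluster_head(nodes, base_station=(0, 0))
```

With these invented weights, the well-charged node closest to the Base Station wins the election, illustrating how the criteria trade off against one another.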

Relevance:

80.00%

Publisher:

Abstract:

Currently, mobile services are an essential tool in the daily life of the population. However, while they offer greater convenience to their users, there is growing concern about harmful effects on human health from daily public exposure to electromagnetic fields from radio base stations (RBS), since even today no study has proven that long-term exposure to low-level fields is not harmful to health. In Presidente Prudente there has been no study reporting measured values of the electromagnetic fields from the base stations installed in the city. Given this, the present study aimed to assess the levels of electromagnetic exposure in the city of Presidente Prudente against those recommended by international bodies, as well as to propose measures that can reduce public exposure to electromagnetic fields. To measure electromagnetic field values, we used an Instrutherm DRE-050 portable digital electromagnetic field meter, following a methodology suggested by and adapted from Adilza Condessa Dode's study. In total, 49 points were mapped, corresponding to the areas at risk of exposure to electromagnetic fields generated by power-grid substations, transmission towers and telecommunication towers located in the city of Presidente Prudente (SP).
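The comparison against international recommendations can be sketched as follows. This uses the general-public reference level for the electric field in the 400-2000 MHz band as commonly tabulated from the ICNIRP 1998 guidelines (E_limit = 1.375 · √f V/m, f in MHz); this formula is an assumption for illustration and should be checked against the current ICNIRP edition before use.

```python
import math

# Sketch: fraction of the assumed ICNIRP (1998) general-public reference
# level represented by a measured E-field, valid only for 400-2000 MHz.

def icnirp_e_limit_vm(f_mhz):
    if not 400 <= f_mhz <= 2000:
        raise ValueError("formula valid only for 400-2000 MHz")
    return 1.375 * math.sqrt(f_mhz)  # reference level in V/m

def exposure_ratio(e_measured_vm, f_mhz):
    """> 1.0 means the measured field exceeds the reference level."""
    return e_measured_vm / icnirp_e_limit_vm(f_mhz)

# e.g. a 1.8 V/m reading near a 900 MHz base station (limit ~41.25 V/m):
ratio = exposure_ratio(1.8, 900.0)
```

Readings like the hypothetical one above, a few percent of the reference level, are typical of what such urban surveys report.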

Relevance:

80.00%

Publisher:

Abstract:

Pós-graduação em Ciência da Computação - IBILCE

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

The extension project "Rádio e TV Escola: capacitação para o uso de canais de comunicação na comunidade escolar" was created in 2006 to enable the introduction of radio and TV stations into two public schools. Supported by the Social Communication courses of Universidade Sagrado Coração, the project's goals were: a) to collaborate with teachers in developing students' literacy in writing and interpreting media texts; and b) to promote greater awareness among graduate students of the social responsibility of media.

Relevance:

80.00%

Publisher:

Abstract:

Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)

Relevance:

80.00%

Publisher:

Abstract:

Laurentide glaciation during the early Pleistocene (~970 ka) dammed the southeast-flowing West Branch of the Susquehanna River (WBSR), scouring bedrock and creating the 100-km-long glacial Lake Lesley near the Great Bend at Muncy, Pennsylvania (Ramage et al., 1998). Local drill logs and well data indicate that subsequent paleo-outwash floods and modern fluvial processes have deposited as much as 30 meters of alluvium in this area, but little is known about the valley fill architecture and the bedrock-alluvium interface. By gaining a greater understanding of the bedrock-alluvium interface, the project not only supplements existing depth-to-bedrock information but also provides information pertinent to the evolution of the Muncy Valley landscape. This project determined whether variations in the thickness of the valley fill were detectable using micro-gravity techniques to map the bedrock-alluvium interface. The gravity method was deemed appropriate due to the scale of the study area (~30 km²), ease of operation by a single person, and the available geophysical equipment. A LaCoste and Romberg Gravitron unit was used to collect gravitational field readings at 49 locations over 5 transects across the Muncy Creek and Susquehanna River valleys, with at least two gravity base stations per transect. Precise latitude, longitude and ground-surface elevation at each location were measured using an OPUS-corrected Trimble RTK-GPS unit. Base stations were chosen for ease of access, given the necessity of repeat measurements. Gravity measurement locations were likewise selected and marked to allow easy, repeatable access. The gravimeter was returned to a base station within every two hours, and a looping procedure was used to determine drift and maximize confidence in the gravity measurements. A two-minute calibration reading at each station was used to minimize any tares in the data.
The Gravitron digitally recorded finite-impulse-response-filtered gravity measurements every 20 seconds at each station. A measurement period of 15 minutes was used for each base station occupation, and a minimum of 5 minutes at all other locations. Longer or multiple measurements were made at sites where drift or other externalities (e.g. train or truck traffic) were affecting readings. The average, median, standard deviation and 95% confidence interval were calculated for each station. Tidal, drift, latitude, free-air, Bouguer and terrain corrections were then applied. The results show that the gravitational field decreases as alluvium thickness increases across the axes of the Susquehanna River and Muncy Creek valleys. However, the location of the gravity low does not correspond with the present-day location of the WBSR, suggesting that the river may have been constrained along Bald Eagle Mountain by a glacial lobe originating from the Muncy Creek Valley to the northeast. Using a 3-D inversion model, the topography of the bedrock-alluvium interface was determined over the extent of the study area using a density contrast of -0.8 g/cm³. Our results are consistent with the bedrock geometry of the area, and provide a low-cost, non-invasive and efficient method for exploring the subsurface and for supplementing existing well data.
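Two of the corrections named above can be sketched with the standard textbook constants (free-air gradient 0.3086 mGal/m; Bouguer slab term 2πGρ ≈ 0.04193 mGal per metre per g/cm³). The station values below are invented, and the tidal, drift, latitude and terrain corrections applied in the study are omitted.

```python
# Sketch of a simple Bouguer anomaly from an observed gravity reading,
# applying only the free-air and Bouguer slab corrections. All gravity
# values in mGal, elevation in metres above the datum.

FREE_AIR_MGAL_PER_M = 0.3086   # free-air gradient
BOUGUER_MGAL = 0.04193         # 2*pi*G*rho slab term, per m per (g/cm^3)

def simple_bouguer_anomaly(g_obs, g_theoretical, elev_m, density_gcc=2.67):
    free_air = FREE_AIR_MGAL_PER_M * elev_m           # station above datum
    bouguer = BOUGUER_MGAL * density_gcc * elev_m     # remove slab attraction
    return g_obs - g_theoretical + free_air - bouguer

anomaly = simple_bouguer_anomaly(980123.4, 980150.0, elev_m=150.0)
```

Denser alluvial fill corresponds to a less negative density contrast, which is why the survey's gravity lows track the thickest valley fill.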

Relevance:

80.00%

Publisher:

Abstract:

This article is a systematic review of whether everyday exposure to radiofrequency electromagnetic fields (RF-EMF) causes symptoms, and whether some individuals are able to detect low-level RF-EMF (below the ICNIRP [International Commission on Non-Ionizing Radiation Protection] guidelines). Peer-reviewed articles published before August 2007 were identified by means of a systematic literature search. Meta-analytic techniques were used to pool the results from studies investigating the ability to discriminate active from sham RF-EMF exposure. RF-EMF discrimination was investigated in seven studies including a total of 182 self-declared electromagnetic hypersensitive (EHS) individuals and 332 non-EHS individuals. The pooled correct field detection rate was 4.2% better than expected by chance (95% CI: -2.1 to 10.5). There was no evidence that EHS individuals could detect the presence or absence of RF-EMF better than other persons. There was little evidence that short-term exposure to a mobile phone or base station causes symptoms, based on the results of eight randomized trials investigating 194 EHS and 346 non-EHS individuals in a laboratory. Some of the trials provided evidence for the occurrence of nocebo effects. In population-based studies, an association between symptoms and exposure to RF-EMF in the everyday environment was repeatedly observed. This review showed that the large majority of individuals who claim to be able to detect low-level RF-EMF are not able to do so under double-blind conditions. If such individuals exist, they represent a small minority and have not been identified yet. The available observational studies do not allow biophysical effects of EMF to be differentiated from nocebo effects.
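The pooling logic behind a result like "4.2% better than chance (95% CI: -2.1 to 10.5)" can be sketched with a normal approximation; the trial counts below are invented, not the review's actual data.

```python
import math

# Sketch (invented counts): pool correct-detection trials across studies and
# express the excess over the 50% chance level with a normal-approximation
# 95% confidence interval.

def pooled_detection(correct, total):
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)
    excess = p - 0.5  # how far above guessing
    return excess, (excess - 1.96 * se, excess + 1.96 * se)

excess, (lo, hi) = pooled_detection(correct=265, total=510)
# if lo <= 0 <= hi, the pooled rate is statistically compatible with guessing
```

A confidence interval that straddles zero, as here, is exactly the pattern the review reports: no demonstrable detection ability beyond chance.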

Relevance:

80.00%

Publisher:

Abstract:

Stubacher Sonnblickkees (SSK) is located in the Hohe Tauern Range (Eastern Alps) in the south of Salzburg Province (Austria) in the region of Oberpinzgau in the upper Stubach Valley. The glacier is situated at the main Alpine crest and faces east, starting at elevations close to 3050 m and in the 1980s terminated at 2500 m a.s.l. It had an area of 1.7 km² at that time, compared with 1 km² in 2013. The glacier type can be classified as a slope glacier, i.e. the relief is covered by a relatively thin ice sheet and there is no regular glacier tongue. The rough subglacial topography makes for a complex shape in the surface topography, with various concave and convex patterns. The main reason for selecting this glacier for mass balance observations (as early as 1963) was to verify on a complex glacier how the mass balance methods and the conclusions - derived during the more or less pioneer phase of glaciological investigations in the 1950s and 1960s - could be applied to the SSK glacier. The decision was influenced by the fact that close to the SSK there was the Rudolfshütte, a hostel of the Austrian Alpine Club (OeAV), newly constructed in the 1950s to replace the old hut dating from 1874. The new Alpenhotel Rudolfshütte, which was run by the Slupetzky family from 1958 to 1970, was the base station for the long-term observation; the cable car to Rudolfshütte, operated by the Austrian Federal Railways (ÖBB), was a logistic advantage. Another factor for choosing SSK as a glaciological research site was the availability of discharge records of the catchment area from the Austrian Federal Railways who had turned the nearby lake Weißsee ('White Lake') - a former natural lake - into a reservoir for their hydroelectric power plants. 
In terms of regional climatic differences between the Central Alps in Tyrol and the Hohe Tauern, the latter experience significantly higher precipitation, so one could expect new insights into the different responses of the two glaciers SSK and Hintereisferner (Ötztal Alps), where a mass balance series goes back to 1952. In 1966 another mass balance series, with an additional focus on runoff recordings, was initiated at Vernagtferner, near Hintereisferner, by the Commission of the Bavarian Academy of Sciences in Munich. The usual and necessary link to climate and climate change was provided by a weather station newly founded by Heinz and Werner Slupetzky at the Rudolfshütte in 1961, which ran until 1967. Along with an extension and enlargement to the so-called Alpine Center Rudolfshütte of the OeAV, a climate observatory (suggested by Heinz Slupetzky) has been operating without interruption since 1980 under the responsibility of ZAMG and the Hydrological Service of Salzburg, providing long-term meteorological observations. The weather station is supported by the Berghotel Rudolfshütte (in 2004 the OeAV sold the hotel to a private owner) with accommodation and facilities. Direct yearly mass balance measurements started in 1963, at first for 3 years as part of a thesis project. In 1965 the project was incorporated into the Austrian glacier measurement sites within the International Hydrological Decade (IHD) 1965-1974 and was afterwards extended via the International Hydrological Program (IHP) 1975-1981. During both periods the main financial support came from the Hydrological Survey of Austria. After 1981 funds were provided by the Hydrological Service of the Federal Government of Salzburg. The research was conducted from 1965 onwards by Heinz Slupetzky from the (former) Department of Geography of the University of Salzburg.
These activities received better recognition when the High Alpine Research Station of the University of Salzburg was founded in 1982 and brought in additional funding from the University. With recent changes concerning Rudolfshütte, however, it became unfeasible to keep the research station going. Fortunately, at least the weather station at Rudolfshütte is still operating. In the pioneer years of the mass balance recordings at SSK, the main goal was to understand the influence of the complicated topography on the ablation and accumulation processes. With frequent strong southerly winds (foehn) on the one hand, and precipitation coming in with storms from the north to northwest on the other, snow drift is an important factor on the undulating glacier surface. This results in less snow cover in convex zones and maximum accumulation in concave or flat areas. As a consequence of the accentuated topography, certain characteristic ablation and accumulation patterns can be observed during the summer season every year, and they have been regularly recorded for many decades. The process of snow depletion (Ausaperung) runs through a series of stages (described by the AAR) every year. The sequence of stages until the end of the ablation season depends on the weather conditions in a balance year. One needs a strongly negative mass balance year at the beginning of glacier measurements to find out the regularities; 1965, the second year of observation, instead produced a very positive mass balance with very little ablation but heavy accumulation. To date it is the year with the absolute maximum positive balance in the entire mass balance series since 1959, probably since 1950. The highly complex ablation patterns required a high number of ablation stakes at the beginning of the research, and it took several years to develop a clearer idea of the density of measurement points necessary to ensure high accuracy.
A great number of snow pits and probing profiles (and additional measurements at crevasses) were necessary to map the accumulation area and its patterns. Mapping the snow depletion, especially at the end of the ablation season, when it coincides with the equilibrium line, provides the main basic data for drawing contour lines of mass balance and calculating the total mass balance (on a regular-shaped valley glacier the equilibrium line may follow a contour line of elevation separating the accumulation area from the ablation area, but not at SSK). An example: in 1969/70, 54 ablation stakes and 22 snow pits were used on the 1.77 km² glacier surface. In the course of the study, the consistency of the accumulation and ablation patterns made it possible to reduce the number of measurement points. At SSK the stratigraphic system, i.e. the natural balance year, is used instead of the usual hydrological year. From 1964 to 1981, the yearly mass balance was calculated from direct measurements. Based on these records of 17 years, a regression analysis between the specific net mass balance and the accumulation-area ratio (AAR) has been used since then. The basic requirement was mapping the maximum snow depletion at the end of each balance year. Heinz Slupetzky's detailed local and long-term experience ensured homogeneity of the series with respect to individual influences on the mass balance calculations. Verification took place as often as possible by means of independent geodetic methods, i.e. monoplotting, aerial and terrestrial photogrammetry, and more recently the application of PHOTOMODELLER and laser scans. The semi-direct mass balance determinations used at SSK were tentatively compared with data from periods of mass/volume change, yielding promising first results on the reliability of the method.
In recent years, re-analyses of mass balance series have been conducted by the World Glacier Monitoring Service and will be done for SSK too. The methods developed at SSK also serve another objective, much discussed in the 1960s within the community, namely to achieve time- and labour-saving methods that ensure the continuation of long-term mass balance series. The regression relations were used to extrapolate the mass balance series back to 1959; the maximum depletion could be reconstructed from photographs for those years. R. Günther (1982) calculated the mass balance series of SSK back to 1950 by analysing the correlation between meteorological data and the mass balance; he found a high statistical relation between measured and modelled mass balance figures for SSK. In spite of the complex glacier topography, interesting empirical experience was gained from the mass balance data sets, giving a better understanding of the characteristics of the glacier type, mass balance and mass exchange. It turned out that there are distinct relations of the specific net balance, net accumulation (defined as Bc/S) and net ablation (Ba/S) to the AAR, resulting in characteristic so-called 'turnover curves'. The diagram of SSK represents the type of a glacier without a glacier tongue. Between 1964 and 1966, a basic method was developed, starting from the idea that instead of measuring for years to cover the range between extremely positive and extremely negative yearly balances, one could record the AAR/snow depletion (Ausaperung) during one or two summers. The new method was applied on Cathedral Massif Glacier, a cirque glacier with the same area as Stubacher Sonnblickkees, in British Columbia, Canada, during the summers of 1977 and 1978. It returned exactly the expected relations, e.g. mass turnover curves, as found on SSK. The SSK was mapped several times at scales of 1:5000 to 1:10000.
Length variations have been measured since 1960 within the OeAV glacier length measurement programme. Between 1965 and 1981, there was a mass gain of 10 million cubic metres. With a time lag of 10 years, this resulted in an advance until the mid-1980s. Since 1982 there has been a distinct mass loss, amounting to 35 million cubic metres by 2013. In recent years, the glacier has disintegrated faster, forced by the formation of a periglacial lake at the glacier terminus and also by outcrops of rock (typical of the slope glacier type), which have accelerated the meltdown. The formation of this lake is well documented. The glacier has retreated by some 600 m since 1981. Since August 2002, a runoff gauge installed by the Hydrographical Service of Salzburg has recorded the discharge of the main part of SSK at the outlet of the new Unterer Eisboden See. The annual reports, submitted from 1982 on as a contractual obligation to the Hydrological Service of Salzburg, document the ongoing processes on the one hand, and on the other summarize the mass balance of SSK and outline the climatological causes, mainly based on the meteorological data of the Rudolfshütte observatory. There is an additional focus on estimating the annual water balance in the catchment area of the lake. Certain preconditions hold for the water balance equation in the area: runoff is recorded by the ÖBB power stations, the mass balance of the now approximately 20% glaciated area (mainly the Sonnblickkees) is measured, and the change in the snow and firn patches and their water content is estimated as well as possible (nowadays laser scanning and ground radar are available to measure the snow pack). There is a net of three precipitation gauges plus the recordings at Rudolfshütte. Evaporation is of minor importance. The long-term annual mean runoff depth in the catchment area is around 3,000 mm/year. The precipitation gauges have measured deficits between 10% and 35%, on average probably 25% to 30%.
That means that the real precipitation in the catchment area of the Weißsee (at elevations between 2,250 and 3,000 m) is on the order of 3,200 to 3,400 mm a year. The mass balance record of SSK was the first established in the Hohe Tauern region (now within the Hohe Tauern National Park, founded in Salzburg in 1983) and is one of the longest measurement series worldwide. Great efforts are under way to continue the series, to safeguard it against interruption and to guarantee long-term monitoring of the mass balance and volume change of SSK (until the glacier is completely gone, which seems realistic in the near future as a result of ongoing global warming). Heinz Slupetzky, March 2014

Relevance:

80.00%

Publisher:

Abstract:

This paper discusses the target localization problem in wireless visual sensor networks. Specifically, each node, equipped with a low-resolution camera, extracts multiple feature points to represent the target at the sensor-node level. A statistical method is presented for merging the position information of different sensor nodes at the base station and selecting the most correlated feature-point pair. This method reduces the influence of target-extraction accuracy on target-localization accuracy in the universal coordinate system. Simulations show that, compared with a related approach, the proposed method yields better target-localization accuracy and a better trade-off between camera-node usage and localization accuracy.
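The merging step at the base station can be illustrated with a simpler stand-in for the paper's statistical method: inverse-variance weighting of the per-node position estimates, so that nodes with noisier feature extraction contribute less. The estimates and variances below are invented.

```python
# Sketch (not the paper's exact merge rule): fuse per-node 2-D target
# estimates by inverse-variance weighting at the base station.

def fuse(estimates):
    """estimates: list of ((x, y), variance) from different camera nodes."""
    wsum = sum(1.0 / var for _, var in estimates)
    x = sum(p[0] / var for p, var in estimates) / wsum
    y = sum(p[1] / var for p, var in estimates) / wsum
    return x, y

# two confident nodes near (10, 5) and one noisy outlier:
est = fuse([((10.2, 5.1), 0.5), ((9.8, 4.9), 0.5), ((12.0, 6.0), 5.0)])
```

The fused estimate stays close to the two confident nodes, showing how weighting suppresses a poorly extracted feature point.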

Relevance:

80.00%

Publisher:

Abstract:

The aim of this paper is to evaluate the performance of mobile WiMAX technology for users in a high-mobility scenario at an operating frequency of 3.5 GHz. Using a modified version of the extended Erceg propagation model, based on the introduction of Rayleigh fading due to multipath, we have calculated the received desired power and the interference power to obtain the statistical signal-to-interference-plus-noise ratio (as a function of the mean value and variance of co-channel interference) and the user throughput. A rural scenario composed of a transmitting base station and users in moving vehicles along a cell sector is proposed. The resulting coverage and throughput have been simulated with a tool built in MATLAB.
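The core of such an evaluation can be sketched with a Monte-Carlo loop: Rayleigh fading makes received powers exponentially distributed, the SINR is formed against co-channel interferers plus noise, and throughput follows from the Shannon bound. The link budgets below are invented; the paper itself uses a modified Erceg path-loss model and a MATLAB tool.

```python
import math
import random

# Monte-Carlo sketch: mean SINR and Shannon throughput under Rayleigh fading.
random.seed(1)

def mean_sinr_and_throughput(p_rx_w, interferer_w, noise_w, bw_hz,
                             trials=20000):
    sinr_acc = rate_acc = 0.0
    for _ in range(trials):
        s = random.expovariate(1.0) * p_rx_w           # faded desired signal
        i = sum(random.expovariate(1.0) * p for p in interferer_w)
        sinr = s / (i + noise_w)
        sinr_acc += sinr
        rate_acc += bw_hz * math.log2(1.0 + sinr)      # Shannon bound
    return sinr_acc / trials, rate_acc / trials

sinr, thr = mean_sinr_and_throughput(
    p_rx_w=1e-9, interferer_w=[1e-11, 2e-11], noise_w=1e-12, bw_hz=10e6)
```

Sweeping the desired and interfering powers along the cell sector would reproduce the kind of coverage/throughput curves the paper reports.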

Relevance:

80.00%

Publisher:

Abstract:

After the extraordinary spread of the World Wide Web during the last fifteen years, engineers and developers are now pushing the Internet to its next frontier. A new conception in computer science and network communication has been burgeoning during roughly the last decade: a world where most of the computers of the future will be extremely downsized, to the point that the most advanced prototypes will look like dust. In this vision, every single element of our "real" world has an intelligent tag that carries all its relevant data, effectively mapping the "real" world into a "virtual" one, where all the electronically augmented objects are present, can interact among themselves and influence with their behaviour that of the other objects, or even the behaviour of a final human user. This is the vision of the Internet of the Future, which also draws on several novel tendencies in computer science and networking, such as pervasive computing and the Internet of Things. As has happened before, materializing a new paradigm that changes the way entities interrelate in this new environment has proved to be a goal full of challenges. Right now the situation is exciting, with a plethora of new developments, proposals and models sprouting all the time, often in an uncoordinated, decentralised manner away from any standardization, somehow resembling the status quo of the first developments of advanced computer networking, back in the 60s and the 70s. Usually, a system designed after the Internet of the Future will consist of one or several final-user devices attached to these final users, a network - often a Wireless Sensor Network - charged with the task of collecting data for the final-user devices, and sometimes a base station sending the data for further processing to less hardware-constrained computers.
When implementing a system designed with the Internet of the Future as a pattern, the issues, and more specifically the limitations, that must be faced are numerous: lack of standards for platforms and protocols, processing bottlenecks, short battery lifetime, etc. One of the main objectives of this project is to present a functional model of how a system based on the paradigms linked to the Internet of the Future works, overcoming some of the difficulties that can be expected and showing a model for a middleware architecture specifically designed for a pervasive, ubiquitous system. This Final Degree Dissertation is divided into several parts. Beginning with an introduction to the main topics and concepts of this new model, a state of the art is offered so as to provide a technological background. After that, an example of a semantic and service-oriented middleware is shown; later, a system built by means of this semantic and service-oriented middleware, and other components, is developed, justifying its placement in a particular scenario, describing it and analysing the data obtained from it. Finally, the conclusions inferred from this system are presented, and future work worth tackling is mentioned as well.

Relevância:

80.00%

Publicador:

Resumo:

Abstract The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many existing approaches suffer either from high computational complexity (e.g., particle filters) or from a lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to strike a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, this local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting.
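As a rough illustration of why RSSI-based localization is nonlinear, and of how a local Gauss-Newton refinement around an initial guess works, consider the standard log-distance path-loss model. The snippet below is a centralized, noiseless sketch with assumed values for the model constants (reference power `P0`, path-loss exponent `ETA`); it is not the consensus-based distributed algorithm of the thesis, only the kind of refinement step that such an algorithm would distribute.

```python
import numpy as np

# Log-distance path-loss model: rssi_i = P0 - 10*eta*log10(||x - a_i||).
# P0 (power at 1 m) and eta (path-loss exponent) are illustrative values.
P0, ETA = -40.0, 3.0
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 6.0])

def rssi(x, a):
    return P0 - 10.0 * ETA * np.log10(np.linalg.norm(x - a))

measurements = np.array([rssi(target, a) for a in anchors])  # noiseless demo

# Invert the model into per-anchor range estimates, then refine the position
# with a few Gauss-Newton iterations on the range residuals.
ranges = 10.0 ** ((P0 - measurements) / (10.0 * ETA))

x = np.mean(anchors, axis=0)            # crude initial guess: anchor centroid
for _ in range(20):
    diffs = x - anchors                 # shape (4, 2)
    dists = np.linalg.norm(diffs, axis=1)
    residuals = dists - ranges
    J = diffs / dists[:, None]          # Jacobian of the range function
    step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
    x = x - step                        # Gauss-Newton update

print(x)  # converges to the true position [3., 6.] in this noiseless demo
```

The exponentiation that turns RSSI into ranges is where the nonlinearity enters, and the iteration shows why a good starting point (here, the anchor centroid; in the thesis, the suboptimal convex estimate) matters for the local search.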
While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer. While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
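The core idea of collaborative beamforming, each node pre-rotating its carrier so that all contributions arrive in phase at the receiver, can be checked numerically. The sketch below assumes each node knows its own complex channel gain to the base station; it shows plain phase alignment only, not the lifetime-maximizing distributed scheme proposed here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # number of sensor nodes

# Complex channel gains from each node to the base station (assumed known
# locally at each node, e.g. from a pilot exchange).
h = rng.normal(size=N) + 1j * rng.normal(size=N)

# Without phase alignment the contributions add with random phases.
incoherent = np.abs(np.sum(h))

# Collaborative beamforming: each node pre-rotates its transmission by the
# conjugate of its own channel phase, so all terms arrive in phase.
w = np.exp(-1j * np.angle(h))
coherent = np.abs(np.sum(h * w))         # equals sum(|h_i|), the maximum

print(incoherent, coherent)
```

By the triangle inequality the aligned sum `coherent` is never smaller than `incoherent`, which is the array gain the abstract exploits; the lifetime-maximizing scheme would additionally scale each node's amplitude according to its battery status.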

Relevância:

80.00%

Publicador:

Resumo:

As wireless sensor networks are usually deployed in unattended areas, security policies cannot be updated in a timely fashion upon identification of new attacks. This gives attackers enough time to cause significant damage. Thus, it is of great importance to provide protection from unknown attacks. However, existing solutions mostly concentrate on known attacks. On the other hand, mobility can make the sensor network more resilient to failures, more reactive to events, and able to support disparate missions with a common set of sensors, yet it complicates the problem of security. In order to address the issue of security in networks with mobile nodes, we propose a machine learning solution for anomaly detection, along with a feature extraction process that tries to detect temporal and spatial inconsistencies in the sequences of sensed values and in the routing paths used to forward these values to the base station. We also propose a special way to treat mobile nodes, which is the main novelty of this work. The data produced in the presence of an attacker are treated as outliers and detected using clustering techniques. These techniques are further coupled with a reputation system, thereby isolating compromised nodes in a timely fashion. The proposal exhibits good performance at detecting and confining previously unseen attacks, including cases in which mobile nodes are compromised.
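As a toy illustration of the outlier-based detection step, the sketch below flags sensed values that deviate strongly from a robust cluster center. It is a hypothetical, heavily simplified stand-in (a single median-based center and a fixed threshold) for the clustering techniques and reputation system actually used in the work; all values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal sensed values (e.g. temperature) cluster tightly; readings injected
# by a compromised node fall far from the cluster and are treated as outliers.
normal = rng.normal(loc=20.0, scale=0.5, size=100)
attack = np.array([35.0, 34.0, 36.0])        # hypothetical injected readings
data = np.concatenate([normal, attack])

# Robust center: the median is barely affected by the few malicious points.
center = np.median(data)
dist = np.abs(data - center)

# Flag readings much farther from the center than the typical deviation.
threshold = 6.0 * np.median(dist)
outliers = dist > threshold

# A reputation system would now decrement the scores of the nodes that
# produced the flagged readings and isolate them once they drop too low.
flagged = np.flatnonzero(outliers)
print(flagged)
```

The key property this toy shares with the clustering approach of the abstract is that it needs no signatures of known attacks: anything sufficiently inconsistent with the bulk of the data is flagged, which is what makes previously unseen attacks detectable.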