994 results for RADIATION SENSOR
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to strike a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization), and it is therefore well suited to implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, this local search can be done in a fully distributed way using a consensus-based distributed version of the Gauss-Newton method. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting.
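As an illustration of the consensus tools the abstract refers to, the sketch below runs plain average consensus on a small network: each node repeatedly moves toward the values of its neighbors until all nodes agree on the global average. The topology, step size, and iteration count are illustrative choices, not values from the thesis.

```python
import numpy as np

def consensus_average(values, neighbors, eps=0.2, iters=100):
    """Linear average consensus: each node nudges its local value toward
    its neighbors' values; all nodes converge to the network-wide mean."""
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# 4-node ring network; every node converges to the global mean 2.5
vals = [1.0, 2.0, 3.0, 4.0]
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(consensus_average(vals, ring))
```

The step size must stay below the reciprocal of the maximum node degree for the iteration to remain stable; here eps = 0.2 < 1/2.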
While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one could use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer. While in the first part we consider only battery depletion due to beamforming communications, we extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
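The co-phasing idea behind collaborative beamforming can be illustrated with a toy example: if each node rotates its transmission by the conjugate of its own channel phase, all contributions arrive in phase and add coherently at the receiver. The channel model and network size below are assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                              # number of sensor nodes (illustrative)
h = rng.normal(size=N) + 1j * rng.normal(size=N)   # complex channel gains to the base station

# Each node cancels its own channel phase so the contributions
# add constructively at the receiver (distributed co-phasing).
w = np.exp(-1j * np.angle(h))          # unit-modulus beamforming weights
coherent = abs(np.sum(h * w))          # aligned sum equals sum of |h_i|
incoherent = abs(np.sum(h))            # no phase alignment

print(coherent, incoherent)
```

By the triangle inequality the co-phased gain is always at least as large as the unaligned one; the lifetime-maximizing scheme described above additionally shapes the weight magnitudes using battery and channel information.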
Resumen: The proliferation of wireless sensor networks, together with the wide variety of possible related applications, has motivated the development of the tools and algorithms needed for cooperative processing in distributed systems. One of the applications that has attracted the most interest in the scientific community is localization, where the nodes of the network jointly estimate the position of a target located within their coverage area. The localization problem is especially challenging when received signal strength (RSSI) levels are used as the localization measurement. The main drawback lies in the fact that the received signal level does not follow a linear relationship with the target's position. Many current solutions to RSSI-based localization rely on complex centralized schemes such as particle filters, while others rely on much simpler schemes with lower accuracy. Moreover, in many cases the strategies are centralized, which makes them impractical to implement in sensor networks. From a practical and implementation point of view, it is convenient, for certain scenarios and applications, to develop alternatives that offer a trade-off between complexity and accuracy. Along these lines, instead of directly tackling the maximum likelihood estimation of the target's position, we propose a suboptimal formulation of the problem that is analytically more tractable and offers the advantage of allowing the localization problem to be solved in a fully distributed way, making it an attractive solution in the context of wireless sensor networks. To this end, distributed processing tools such as consensus algorithms and convex optimization in distributed systems are used.
For applications requiring a higher degree of accuracy, we propose a strategy consisting of a local optimization of the likelihood function around the initially obtained estimate. This optimization can be carried out in a decentralized fashion using a consensus-based version of the Gauss-Newton method, provided that the measurement noises at the different nodes are assumed independent. Regardless of the underlying application of the sensor network, a mechanism is needed to collect the data generated by the network. One way to do so is through one or more special nodes, called sink nodes, that act as information collection centers and are equipped with additional hardware allowing them to interact with the world outside the network. The main disadvantage of this strategy is that such nodes become bottlenecks in terms of traffic and computational capacity. As an alternative, cooperative beamforming techniques can be used, so that the network as a whole can be seen as a single virtual multi-antenna system and can thus exploit the benefits offered by multi-antenna communications. To this end, the nodes of the network synchronize their transmissions so that they interfere constructively at the receiver. However, current techniques rely on average and asymptotic results, valid when the number of nodes is very large. For a specific configuration, control over the radiation pattern is lost, potentially causing interference to coexisting systems or spending more power than required. Energy efficiency is a major issue in wireless sensor networks, since the nodes are battery powered.
It is therefore very important to preserve the batteries, avoiding unnecessary replacements and the resulting increase in cost. Under these considerations, we propose a beamforming scheme that maximizes the useful lifetime of the network, understood as the maximum time the network can remain operational while guaranteeing quality of service (QoS) requirements that allow reliable decoding of the received signal at the base station. We also propose distributed algorithms that converge to the centralized solution. Initially, the only source of energy consumption considered is communication with the base station. This energy consumption model is then modified to take into account other forms of energy consumption arising from processes inherent to the operation of the network, such as data acquisition and processing, local communications among nodes, etc. This additional energy consumption is modeled as a random variable at each node. The problem thus becomes probabilistic, generalizing the deterministic case, and conditions are provided under which it can be solved efficiently. It is shown that the network lifetime improves significantly under the proposed energy-efficiency criterion.
Abstract:
Laser diodes have been employed many times as light sources in different kinds of optical sensors. Their main function in these applications was to emit optical radiation onto a certain object; according to the characteristics of the reflected light, some information about this object was then obtained. Laser diodes were acting, in a certain way, just as passive devices whose only function was to provide adequate radiation to be later measured and analyzed. The objective of this paper is to report a new concept for the use of laser diodes that takes into account their optical bistability. As has been shown elsewhere, different laser diodes, for example DFB lasers and FP lasers, offer bistable characteristics that depend on parameters such as wavelength, light polarization, and temperature. Laser bistability is strongly dependent on these parameters, and any small variation in them gives rise to a strong change in the device's nonlinear properties. These variations are analyzed and their application to sensing is reported. The dependence on wavelength, spectral width, input power, and phase variations, mainly for a Fabry-Perot laser structure as the basic configuration, is shown in this paper.
Abstract:
Total Ionization Dose (TID) is traditionally measured by radiation-sensitive FETs (RADFETs) that require a radiation-hardened Analog-to-Digital Converter (ADC) stage. This work introduces a TID sensor based on a delay path whose propagation time is sensitive to the absorbed radiation. It presents the following advantages: it is a digital sensor that can be integrated in CMOS circuits and programmable systems such as FPGAs; it has a configurable sensitivity that allows the device to be used for radiation doses ranging from very low to relatively high levels; its interface helps to integrate the sensor into a multidisciplinary sensor network; and it is self-timed, hence it does not need a clock signal that could degrade its accuracy. The sensor has been prototyped in a 0.35 μm technology, has an area of 0.047 mm², of which 22% is dedicated to measuring radiation, and an energy per conversion of 463 pJ. Experimental irradiation tests have validated the correct response of the proposed TID sensor.
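As a rough illustration of how a delay-based TID readout could work (all numeric parameters below are invented for the sketch and are not the paper's values), absorbed dose slows the delay path, which lowers a self-timed count accumulated over a fixed gate window:

```python
def delay_ns(dose_krad, t0_ns=10.0, k=0.05):
    """Illustrative model (parameters are assumptions): path delay grows
    roughly linearly with the absorbed total ionizing dose."""
    return t0_ns * (1.0 + k * dose_krad)

def digital_readout(dose_krad, gate_ns=1000.0):
    """Count how many traversals of the delay path fit in a gate window;
    fewer counts indicate a slower path, i.e. a higher absorbed dose."""
    return int(gate_ns / delay_ns(dose_krad))

for dose in (0, 10, 50):
    print(dose, digital_readout(dose))
```

The monotone count-versus-dose relationship is what makes the output directly usable as a digital word, with no radiation-hardened ADC in the path; sensitivity could be tuned by changing the gate window or the delay-path length.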
Abstract:
The ERS-1 satellite carries a scatterometer which measures the amount of radiation scattered back toward the satellite by the ocean's surface. These measurements can be used to infer wind vectors. The implementation of a neural-network-based forward model which maps wind vectors to radar backscatter is addressed. Input noise cannot be neglected. To account for this noise, a Bayesian framework is adopted. However, Markov Chain Monte Carlo sampling is too computationally expensive. Instead, gradient information is used with a non-linear optimisation algorithm to find the maximum a posteriori probability values of the unknown variables. The resulting models are shown to compare well with the current operational model when visualised in the target space.
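The MAP-by-gradient idea can be sketched as follows. The forward model here is a toy stand-in for the trained neural network, the noise levels are illustrative, and plain gradient descent with a numerical gradient stands in for the non-linear optimiser; minimising the negative log posterior (data misfit plus a Gaussian prior on the noisy input) yields the MAP estimate.

```python
import numpy as np

def forward(x):
    """Toy stand-in for the neural-network forward model (wind vector -> backscatter)."""
    return np.array([np.sin(x[0]) + 0.5 * x[1], 0.3 * x[0] * x[1]])

def neg_log_posterior(x, y_obs, x_prior, sigma_y=0.3, sigma_x=1.0):
    # Gaussian likelihood on the observed backscatter plus a Gaussian
    # prior on the noisy input; the minimiser is the MAP estimate.
    misfit = np.sum((forward(x) - y_obs) ** 2) / (2 * sigma_y ** 2)
    prior = np.sum((x - x_prior) ** 2) / (2 * sigma_x ** 2)
    return misfit + prior

def num_grad(f, x, h=1e-5):
    # Central-difference gradient, standing in for backpropagated gradients.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x_true = np.array([0.8, 1.2])      # "true" wind vector (illustrative)
y_obs = forward(x_true)            # noiseless observation for simplicity
f = lambda x: neg_log_posterior(x, y_obs, x_true)
x = np.zeros(2)                    # start from an uninformative guess
for _ in range(2000):
    x = x - 0.01 * num_grad(f, x)
print(x)  # converges toward x_true
```

In practice the network's own analytic gradients make each optimiser step far cheaper than any sampling scheme, which is the trade-off the abstract argues for.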
Abstract:
We present experimental results on the performance of a series of coated, D-shaped optical fiber sensors that display high spectral sensitivities to external refractive index. Sensitivity to the chosen index regime and coupling of the fiber core mode to the surface plasmon resonance (SPR) is enhanced by using specific materials as part of a multi-layered coating. We present strong evidence that this effect is enhanced by post-deposition ultraviolet irradiation of the lamellar coating, which results in the formation of a nano-scale surface relief corrugation structure; this generates an index perturbation within the fiber core that in turn enhances the coupling. We have found reasonable agreement when modeling the fiber device. It was found that the SPR devices operate in air with high coupling efficiency in excess of 40 dB, with spectral sensitivities that outperform a typical long period grating, one device yielding a wavelength spectral sensitivity of 12000 nm/RIU in the important aqueous index regime. The devices generate SPRs over a very large wavelength range (visible to 2 μm) by altering the polarization state of the illuminating light.
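The quoted figure of 12000 nm/RIU is simply the resonance-wavelength shift per unit change of the external refractive index; the example shift and index change below are illustrative numbers chosen to reproduce that figure.

```python
def spectral_sensitivity(dlambda_nm, dn_riu):
    """Spectral sensitivity in nanometres per refractive-index unit (nm/RIU):
    resonance-wavelength shift divided by the index change that caused it."""
    return dlambda_nm / dn_riu

# A 1.2 nm resonance shift for a 1e-4 RIU index change gives ~12000 nm/RIU,
# matching the figure quoted for the best device above.
print(spectral_sensitivity(1.2, 1e-4))
```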
Abstract:
Vegetation changes, such as shrub encroachment and wetland expansion, have been observed in many Arctic tundra regions. These changes feed back to permafrost and climate. Permafrost can be protected by soil shading through vegetation as it reduces the amount of solar energy available for thawing. Regional climate can be affected by a reduction in surface albedo as more energy is available for atmospheric and soil heating. Here, we compared the shortwave radiation budget of two common Arctic tundra vegetation types dominated by dwarf shrubs (Betula nana) and wet sedges (Eriophorum angustifolium) in North-East Siberia. We measured time series of the shortwave and longwave radiation budget above the canopy and transmitted radiation below the canopy. Additionally, we quantified soil temperature and heat flux as well as active layer thickness. The mean growing season albedo of dwarf shrubs was 0.15 ± 0.01, for sedges it was higher (0.17 ± 0.02). Dwarf shrub transmittance was 0.36 ± 0.07 on average, and sedge transmittance was 0.28 ± 0.08. The standing dead leaves contributed strongly to the soil shading of wet sedges. Despite a lower albedo and less soil shading, the soil below dwarf shrubs conducted less heat resulting in a 17 cm shallower active layer as compared to sedges. This result was supported by additional, spatially distributed measurements of both vegetation types. Clouds were a major influencing factor for albedo and transmittance, particularly in sedge vegetation. Cloud cover reduced the albedo by 0.01 in dwarf shrubs and by 0.03 in sedges, while transmittance was increased by 0.08 and 0.10 in dwarf shrubs and sedges, respectively. Our results suggest that the observed deeper active layer below wet sedges is not primarily a result of the summer canopy radiation budget. 
Soil properties, such as soil albedo, moisture, and thermal conductivity, may be more influential, at least in our comparison between dwarf shrub vegetation on relatively dry patches and sedge vegetation with higher soil moisture.
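The albedo and transmittance values reported above are ratios of the measured shortwave fluxes: reflected over incoming radiation for albedo, and below-canopy over above-canopy radiation for transmittance. A minimal sketch (the flux values in W/m² are illustrative, chosen to match the dwarf-shrub means):

```python
def albedo(sw_up, sw_down):
    """Surface albedo: reflected over incoming shortwave radiation."""
    return sw_up / sw_down

def transmittance(sw_below, sw_above):
    """Canopy transmittance: shortwave reaching the soil surface
    over shortwave measured above the canopy."""
    return sw_below / sw_above

print(albedo(60.0, 400.0))          # dwarf-shrub mean albedo ~0.15
print(transmittance(144.0, 400.0))  # dwarf-shrub mean transmittance ~0.36
```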
Abstract:
Numerical modelling and simulations are needed to develop and test specific analysis methods by providing test data before BIRDY is launched. This document describes the "satellite data simulator", a multi-sensor, multi-spectral satellite simulator produced especially for the BIRDY mission, which could also be used to analyse data from other satellite missions that provide energetic-particle data in the Solar System.