991 results for visible image sensor
Resumo:
This study presents an analysis, by remote sensing, of areas susceptible to degradation in a semi-arid region. Degradation there is a matter of concern that affects the whole population, and the process is catalysed by deforestation of the caatinga savanna and improper land-use practices. The objective of this research is to use biophysical parameters from MODIS/Terra and TM/Landsat-5 images to determine areas susceptible to degradation in the semi-arid zone of Paraíba. The study area is located in the central interior of Paraíba, in the sub-basin of the Taperoá River, with average annual rainfall below 400 mm and an average annual temperature of 28 °C. TM/Landsat-5 images, specifically the 5R4G3B colour composite commonly used for land-use mapping, were used to draw up the vegetation map. This map was produced by maximum-likelihood classification. The legend corresponds to the following targets: sparse and dense savanna vegetation, riparian vegetation, and exposed soil. The biophysical parameters derived from MODIS were emissivity, albedo, and the Normalized Difference Vegetation Index (NDVI). The GIS programs used were the MODIS Reprojection Tools and the Georeferenced Information Processing System (SPRING), in which the MODIS and TM data were organized and processed, together with ArcGIS for producing more customizable maps. Initially, the behaviour of vegetation emissivity was evaluated by adapting the Bastiaanssen equation to NDVI in order to spatialize emissivity and observe changes during 2006. The albedo was used to map its percentage increase between December 2003 and December 2004. The Landsat TM images were used for December 2005, according to image availability and a period of low emissivity. These applications were implemented in the Spatial Algebra Language for GIS (LEGAL), a SPRING programming routine that performs various types of algebra on spatial data and maps.
For the detection of areas susceptible to environmental degradation, the seasonal behaviour of the savanna emissivity was taken into account: it coincided with the rainy season, reaching a maximum between April and July and remaining low in the other months. From the albedo images of December 2003 and 2004, the percentage increase was computed, which allowed the generation of two distinct classes: areas with a percentage increase of 1 to 11.6%, and areas with a change of less than 1% in albedo. It was then possible to generate the map of susceptibility to environmental degradation by intersecting the exposed-soil class with the albedo percentage-change classes, resulting in classes of susceptibility to environmental degradation.
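The two processing steps described above, emissivity spatialization from NDVI and albedo-change classification, can be sketched as follows. The logarithmic relation is the SEBAL-style Bastiaanssen form and the 1-11.6% class break is taken from the abstract; the function names and the NDVI clipping floor are illustrative assumptions, not the study's LEGAL code:

```python
import numpy as np

def emissivity_from_ndvi(ndvi):
    """Broadband surface emissivity via the SEBAL/Bastiaanssen relation
    eps = 1.009 + 0.047 * ln(NDVI), valid for NDVI > 0 (NaN elsewhere)."""
    ndvi = np.asarray(ndvi, dtype=float)
    return np.where(ndvi > 0,
                    1.009 + 0.047 * np.log(np.maximum(ndvi, 1e-6)),
                    np.nan)

def albedo_change_classes(albedo_old, albedo_new):
    """Classify the percentage albedo increase between two dates:
    1 = increase of 1 to 11.6%, 0 = change below 1%."""
    old = np.asarray(albedo_old, dtype=float)
    new = np.asarray(albedo_new, dtype=float)
    pct = 100.0 * (new - old) / old
    return np.where((pct >= 1.0) & (pct <= 11.6), 1, 0)
```

Both functions are array-valued, so they apply pixel-wise to whole MODIS rasters.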
Resumo:
The silicon-based gate-controlled lateral bipolar junction transistor (BJT) is a controllable four-terminal photodetector with very high responsivity at low light intensities. It is a hybrid device composed of a MOSFET, a lateral BJT, and a vertical BJT. With sufficient gate bias to operate the MOS transistor in inversion mode, the photodetector increases the photocurrent gain by a factor of 10^6 at low light intensities when the base-emitter voltage is smaller than 0.4 V and the BJT is off. Two operation modes, with a constant voltage bias between the gate and emitter/source terminals or between the gate and base/body terminals, allow the photoresponse to be tuned from sublinear to slightly above linear, satisfying the application requirements for wide-dynamic-range, high-contrast, or linear imaging. MOSFETs from a standard 0.18-μm triple-well complementary metal-oxide-semiconductor technology with a width-to-length ratio of 8 μm/2 μm and a total area of ∼500 μm² are used. For this area, the responsivities are 16-20 kA/W. © 2001-2012 IEEE.
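As a rough sanity check on the figures quoted above, the responsivity definition R = I_ph/P_opt and the internal gain it implies can be computed directly. The wavelength and the unity quantum efficiency below are illustrative assumptions, not values from the paper:

```python
# CODATA exact physical constants
q = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s

def responsivity(photocurrent_a, optical_power_w):
    """Responsivity R = I_ph / P_opt, in A/W."""
    return photocurrent_a / optical_power_w

def implied_gain(resp_a_per_w, wavelength_m, quantum_efficiency=1.0):
    """Internal gain implied by a responsivity, from R = eta * G * q * lambda / (h * c)."""
    return resp_a_per_w * h * c / (quantum_efficiency * q * wavelength_m)
```

At an assumed 550 nm and unity quantum efficiency, a responsivity of 16 kA/W implies an internal gain on the order of 10^4.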
Resumo:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Resumo:
The use of new techniques to study the evolution and infilling of incised valleys has, over the years, provided important results for understanding Brazilian coastal evolution. In this context, this thesis aimed to study the evolution of the Coreaú River estuary, located in the state of Ceará, at the different temporal scales proposed by Cowell et al. (2003), namely "Event" (months, years), "Engineering" (years, decades), and "Geological" (hundreds of years, centuries, millennia), in order to assess whether the transformations over the years were significant. For the first objective, using remote sensing techniques on images from the TM, ETM+, and OLI sensors of Landsat 5, 7, and 8 and from the LISS-3 sensor of ResourceSat-1, covering 1985 to 2013, minimal morphological change was found along the estuary over the last 28 years (between the Event and Engineering scales); in this period there was an increase of 0.236 km² (3%) in area, which did not bring significant changes to the estuary. Regarding the sedimentation rate, corresponding to the second objective, nine cores of up to 1 m depth were collected along the estuary and dated with the radionuclide 210Pb, yielding rates ranging from 0.33 cm/yr to 1 cm/yr (between the Engineering and Geological scales) near the estuary mouth, with faster sedimentation observed on the eastern margin of the river, where sediments are more recent than on the western margin. Regarding the infilling, the third and final objective, cores of up to 18 m depth sampled with a Rammkernsonden (RKS) corer were used to generate stratigraphic profiles and sections, which helped to explain the infilling of the incised valley of the Coreaú River estuary and to establish that it is a fluvial-marine estuary that has filled the valleys incised into the Barreiras Group over the last 10,000 years before present.
These analyses and results will serve as a basis for comparison with other estuaries, whether fluvial, fluvial-marine, or marine, so that we can better understand which events dominated sedimentation along the Brazilian coast at different scales.
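Sedimentation rates from excess 210Pb profiles, like those mentioned above, are commonly derived with the CF:CS (constant flux, constant sedimentation) model. The sketch below is a generic illustration of that model, not the thesis's actual computation, and the variable names are assumptions:

```python
import math

# 210Pb decay constant (half-life ~22.3 yr)
LAMBDA_PB210 = math.log(2) / 22.3  # 1/yr

def sedimentation_rate_cfcs(depth_cm, excess_activity_surface, excess_activity_depth):
    """CF:CS model: excess 210Pb activity decays with depth as
    C(z) = C0 * exp(-lambda * z / s), so the rate s (cm/yr) follows
    from the log-ratio of surface to at-depth activity."""
    ratio = excess_activity_surface / excess_activity_depth
    return LAMBDA_PB210 * depth_cm / math.log(ratio)
```

In practice the rate is fitted by regression over the whole ln-activity profile rather than from a single pair of depths.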
Resumo:
The purpose of this research was to develop a working physical model of the focused plenoptic camera and to develop software that can process the measured image intensity, reconstruct it into a full-resolution image, and derive a depth map from the corresponding rendered image. The plenoptic camera is a specialized imaging system designed to acquire spatial, angular, and depth information in a single intensity measurement. The camera can also computationally refocus an image by adjusting the patch size used to reconstruct the image. The published methods have been vague and conflicting, so the motivation behind this research was to decipher the work that has been done in order to develop a working proof-of-concept model. This thesis outlines the theory behind plenoptic camera operation and shows how the measured intensity from the image sensor can be turned into a full-resolution rendered image with its corresponding depth map. The depth map can be created by a cross-correlation of adjacent sub-images created by the microlenslet array (MLA). The full-resolution image reconstruction can be done by taking a patch from each MLA sub-image and piecing them together like a puzzle; the patch size determines which object plane will be in focus. This thesis also goes through a rigorous explanation of the design constraints involved in building a plenoptic camera. Plenoptic camera data from Adobe were used to help develop the algorithms written to create a rendered image and its depth map. Finally, using the algorithms developed from these tests and the knowledge gained in developing the plenoptic camera, a working experimental system was built, which successfully generated a rendered image and its corresponding depth map.
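The patch-stitching reconstruction described above can be sketched in a few lines. This is a minimal illustration, assuming a square MLA grid aligned with the sensor; the function name and centred-patch choice are assumptions, not the thesis's actual renderer:

```python
import numpy as np

def render_full_resolution(raw, mla_pitch, patch_size):
    """Stitch a centred patch from each microlens sub-image into one image.
    raw: 2-D intensity array from the sensor;
    mla_pitch: sub-image size in pixels (one per microlens);
    patch_size: pixels taken per sub-image (selects the in-focus object plane)."""
    ny, nx = raw.shape[0] // mla_pitch, raw.shape[1] // mla_pitch
    off = (mla_pitch - patch_size) // 2  # centre the patch in the sub-image
    rows = []
    for j in range(ny):
        row = []
        for i in range(nx):
            y0 = j * mla_pitch + off
            x0 = i * mla_pitch + off
            row.append(raw[y0:y0 + patch_size, x0:x0 + patch_size])
        rows.append(np.hstack(row))
    return np.vstack(rows)
```

Refocusing amounts to re-running this with a different `patch_size`; the depth map would come from cross-correlating adjacent sub-images, which is omitted here.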
Resumo:
Thermography is an inspection and diagnostic method based on the infrared radiation emitted by bodies. It allows this radiation to be measured at a distance and without contact, yielding a thermogram or thermographic image, the object of study of this project. All bodies at a given temperature emit infrared radiation. However, a thermographic inspection must take into account the emissivity of the body, its capacity to emit radiation, since this depends not only on the body's temperature but also on its surface characteristics. The tools needed to obtain a thermogram are mainly a thermographic camera and software for its analysis. The camera senses the infrared emission of an object and converts it into a visible image, originally monochrome; it is then coloured by the camera itself or by software for easier interpretation of the thermogram. Several techniques exist for obtaining these thermographic images, differing in how heat energy is transferred to the body. They are classified into passive thermography, active thermography, and vibrothermography. The method used in each case depends on the thermal characteristics of the body, the type of defect to be located, and the spatial resolution of the images, among other factors. Accuracy is important for analysing the images in order to obtain diagnoses and detect defects.
The images are therefore processed to minimize effects caused by external factors, improve image quality, and extract information from the inspections performed. Thermography is a very flexible non-destructive testing method that offers many advantages, so its field of application is very wide, ranging from industrial applications to research and development. Surveillance and security, energy saving, medicine, and the environment are some of the fields where thermography provides significant benefits. This project is a theoretical study of thermography, describing each of the above aspects in detail. It concludes with a practical application: building an infrared camera from a webcam and analysing the images obtained with it. This demonstrates some of the theories explained, as well as the possibility of recognizing objects by means of thermography.
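The role of emissivity described above can be illustrated with the Stefan-Boltzmann law: a camera that assumes blackbody emission under-reads the temperature of a low-emissivity surface. This is a deliberately simplified radiometric sketch (it ignores reflected ambient radiation and the camera's actual spectral band):

```python
def true_temperature(apparent_temp_k, emissivity):
    """Correct an apparent (blackbody-assumed) temperature reading for surface
    emissivity, using total exitance M = emissivity * sigma * T^4:
    the same measured M implies T_true = T_apparent / emissivity**0.25."""
    return apparent_temp_k / emissivity ** 0.25
```

For example, a surface read as 300 K with an emissivity of 0.9 is actually about 8 K hotter than the camera reports.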
Resumo:
This paper presents an up-to-date review of digital watermarking (WM) from a VLSI designer's point of view. The reader is introduced to basic principles and terms in the field of image watermarking. The paper goes through a brief survey of WM theory, laying out common classification criteria and discussing important design considerations and trade-offs. Elementary WM properties such as robustness and computational complexity, and their influence on image quality, are discussed. Common attacks and testing benchmarks are also briefly mentioned. It is shown that WM design must take the intended application into account. The difference between software and hardware implementations is explained through the introduction of a general scheme of a WM system and two examples from previous works. A versatile methodology to aid in a reliable and modular design process is suggested. With respect to mixed-signal VLSI design and testing, the proposed methodology allows efficient development of a CMOS image sensor with WM capabilities.
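A minimal spatial-domain example of the kind of embedding such systems perform is least-significant-bit (LSB) substitution. This is purely illustrative, a fragile scheme used here to make the embed/extract asymmetry concrete, not the hardware method of any surveyed work:

```python
import numpy as np

def embed_lsb(image, bits):
    """Embed a bit array into the least significant bit of the first pixels
    (spatial-domain watermark; imperceptible, but not robust to attacks)."""
    flat = image.astype(np.uint8).ravel().copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Recover the first n_bits watermark bits from the pixel LSBs."""
    return image.ravel()[:n_bits] & 1
```

Each embedded bit perturbs its pixel by at most one grey level, which is why robustness (surviving compression, filtering, cropping) rather than imperceptibility is the hard design constraint the review emphasizes.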
Resumo:
A two-terminal optically addressed image processing device based on two stacked sensing/switching p-i-n a-SiC:H diodes is presented. The charge packets are injected optically into the p-i-n sensing photodiode and confined at the illuminated regions, locally changing the electric field profile across the p-i-n switching diode. A red scanner is used for charge readout. The various design parameters and addressing architecture trade-offs are discussed. The influence on the transfer functions of an a-SiC:H sensing absorber optimized for red transmittance and blue collection, or of a floating anode in between, is analysed. Results show that the thin a-SiC:H sensing absorber confines the readout to the switching diode and filters the light, allowing full colour detection at two appropriate voltages. When the floating anode is used, the spectral response broadens, allowing B&W image recognition with improved light-to-dark sensitivity. A physical model supports the image and colour recognition process.
Resumo:
The need for more efficient illumination systems has led to the proliferation of Solid-State Lighting (SSL) systems, which offer optimized power consumption. SSL systems are composed of LEDs, which are intrinsically fast devices and permit very fast light modulation. This, along with the congestion of the radio-frequency spectrum, has paved the way for the emergence of Visible Light Communication (VLC) systems. VLC uses free space to convey information by means of light modulation. Nevertheless, as VLC systems proliferate and cost competitiveness ensues, two important aspects must be considered. State-of-the-art VLC implementations use power-demanding power amplifiers (PAs), so it is important to investigate whether regular, existing Switched-Mode Power Supply (SMPS) circuits can be adapted for VLC use. A 28 W buck regulator was implemented using an off-the-shelf LED driver integrated circuit, with both series and parallel dimming techniques. Results show that optical clock frequencies up to 500 kHz are achievable without any major modification beyond adequate component sizing. The use of an LED as a sensor was also investigated, in a short-range, low-data-rate perspective. Results show successful communication in an LED-to-LED configuration, with enhanced range when using LED strings as sensors. Moreover, LEDs present spectrally selective sensitivity, which makes them good contenders for a multi-colour LED-to-LED system, such as in RGB displays and lamps. Ultimately, the present work shows evidence that LEDs can be used as dual-purpose devices, enabling not only illumination but also bi-directional data communication.
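The light-modulation idea behind VLC can be illustrated with simple on-off keying (OOK). This sketch abstracts the LED driver to a list of drive levels; it is an assumption for illustration only, not the 28 W buck implementation or dimming scheme described above:

```python
def ook_encode(bits, samples_per_bit=4):
    """On-off keying: each bit becomes a run of high/low LED drive levels."""
    return [level for b in bits for level in [1 if b else 0] * samples_per_bit]

def ook_decode(samples, samples_per_bit=4, threshold=0.5):
    """Average each bit period of the received photocurrent samples and threshold."""
    bits = []
    for i in range(0, len(samples), samples_per_bit):
        chunk = samples[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits
```

In an LED-to-LED link like the one investigated, the receive side would be the photocurrent of a reverse-biased LED sampled at the optical clock rate.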
Resumo:
Given the limitations of different types of remote sensing images, automated land-cover classifications of the Amazon várzea may yield poor accuracy indexes. One way to improve accuracy is through the combination of images from different sensors, by either image fusion or multi-sensor classifications. Therefore, the objective of this study was to determine which classification method is more efficient in improving land cover classification accuracies for the Amazon várzea and similar wetland environments - (a) synthetically fused optical and SAR images or (b) multi-sensor classification of paired SAR and optical images. Land cover classifications based on images from a single sensor (Landsat TM or Radarsat-2) are compared with multi-sensor and image fusion classifications. Object-based image analyses (OBIA) and the J.48 data-mining algorithm were used for automated classification, and classification accuracies were assessed using the kappa index of agreement and the recently proposed allocation and quantity disagreement measures. Overall, optical-based classifications had better accuracy than SAR-based classifications. Once both datasets were combined using the multi-sensor approach, there was a 2% decrease in allocation disagreement, as the method was able to overcome part of the limitations present in both images. Accuracy decreased when image fusion methods were used, however. We therefore concluded that the multi-sensor classification method is more appropriate for classifying land cover in the Amazon várzea.
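The accuracy measures named above, the kappa index and the quantity/allocation disagreement decomposition, can both be computed from a confusion matrix. The 2-class example matrix in the test is illustrative, not data from the study:

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa index of agreement from a confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # chance agreement
    return (po - pe) / (1 - pe)

def quantity_allocation_disagreement(cm):
    """Pontius & Millones-style decomposition: quantity disagreement is the
    mismatch in class proportions; allocation disagreement is the rest of
    the total disagreement (1 - overall agreement)."""
    p = np.asarray(cm, dtype=float)
    p = p / p.sum()
    quantity = 0.5 * np.abs(p.sum(0) - p.sum(1)).sum()
    total = 1.0 - np.trace(p)
    return quantity, total - quantity
```

The decomposition makes the study's result interpretable: a 2% drop in allocation disagreement means pixels were placed in the right classes more often, even if the overall class proportions barely changed.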
Resumo:
Usual image fusion methods inject features from a high-spatial-resolution panchromatic sensor into every low-spatial-resolution multispectral band, trying to preserve spectral signatures while improving spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. In these methods, however, features from electromagnetic-spectrum regions not covered by the multispectral sensors are injected into the fused bands, and the physical spectral responses of the sensors are not considered during the process. This produces undesirable effects, such as over-injection of resolution into the images and slightly modified spectral signatures in some features. The authors present a technique that takes into account the physical electromagnetic-spectrum responses of the sensors during the fusion process, producing images closer to the image that would be obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
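A minimal version of the kind of wavelet substitution scheme discussed above, using a one-level Haar transform, might look like the sketch below. The paper's method is more elaborate and sensor-aware; this illustrates only the baseline idea of keeping the multispectral approximation band and injecting the panchromatic detail bands:

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = a.astype(float)
    tl, tr = a[0::2, 0::2], a[0::2, 1::2]
    bl, br = a[1::2, 0::2], a[1::2, 1::2]
    return ((tl + tr + bl + br) / 4, (tl + tr - bl - br) / 4,
            (tl - tr + bl - br) / 4, (tl - tr - bl + br) / 4)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def substitution_fuse(ms_band, pan):
    """Keep the MS approximation (LL) band, substitute the pan detail bands."""
    _, lh, hl, hh = haar2(pan)
    ll_ms, _, _, _ = haar2(ms_band)
    return ihaar2(ll_ms, lh, hl, hh)
```

The spectral artifacts the paper criticizes arise exactly here: the substituted detail bands carry panchromatic energy from spectral regions the multispectral band never sensed.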
Resumo:
Bibliographic reference: Rol, 60650
Resumo:
Tunable Optical Sensor Arrays (TOSA) based on Fabry-Pérot (FP) filters for high-quality spectroscopic applications in the visible and near-infrared spectral range are investigated in this work. The optical performance of the FP filters is improved by using ion-beam-sputtered niobium pentoxide (Nb2O5) and silicon dioxide (SiO2) Distributed Bragg Reflectors (DBRs) as mirrors. Due to their high refractive-index contrast, only a few alternating pairs of Nb2O5 and SiO2 films are needed to achieve DBRs with high reflectivity over a wide spectral range, and ion beam sputter deposition (IBSD) is utilized for its ability to produce films of high optical purity. However, IBSD films are highly stressed, resulting in stress-induced mirror curvature and in bending of the free-standing filter suspensions of the MEMS (Micro-Electro-Mechanical Systems) FP filters. Stress-induced mirror curvature degrades the filter transmission line, while suspension bending results in high required filter tuning voltages. Moreover, stress-induced suspension bending results in higher-order-mode filter operation, which in turn degrades the optical resolution of the filter. Therefore, the deposition process is optimized to achieve both near-zero absorption and low residual stress. High-energy ion bombardment during film deposition is utilized to reduce the film density, and hence the compressive film stress. Using this technique, the compressive stress of Nb2O5 is reduced by ~43%, while that of SiO2 is reduced by ~40%. Filters fabricated with stress-reduced films show curvatures as low as 100 nm for 70 μm mirrors. To reduce the stress-induced bending of the free-standing filter suspensions, a stress-optimized multi-layer suspension design is presented, with a tensile-stressed metal sandwiched between two compressively stressed films. The stress in Physical Vapor Deposited (PVD) metals is therefore characterized for use as the filter top electrode and stress-compensating layer.
Surface micromachining is used to fabricate tunable FP filters in the visible spectral range using the above-mentioned design. The upward bending of the suspensions is reduced from several micrometers to less than 100 nm and 250 nm for two different suspension layer combinations. Mechanical tuning of up to 188 nm is obtained by applying an actuation voltage of 40 V. A filter line with a transmission of 65.5%, a Full Width at Half Maximum (FWHM) of 10.5 nm, and a stopband of 170 nm (at an output wavelength of 594 nm) is achieved. Numerical model simulations are also performed to study the validity of the stress-optimized suspension design for the near-infrared spectral range, in which membrane displacement and suspension deformation due to residual material stress are studied. Two bandpass filter designs based on quarter-wave and non-quarter-wave layers are presented as integral components of the TOSA. With a filter passband of 135 nm and a broad stopband of over 650 nm, a high average filter transmission of 88% is achieved inside the passband, while the maximum filter transmission outside the passband is kept below 1.6%.
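The claim that a few Nb2O5/SiO2 pairs suffice for high reflectivity can be checked with a standard normal-incidence transfer-matrix calculation. The refractive indices, pair count, and glass substrate below are nominal assumptions for illustration, not the work's actual design values:

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic-matrix (transfer-matrix) method."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength  # layer phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Quarter-wave Nb2O5/SiO2 DBR at 594 nm (nominal indices, 6 pairs, assumed)
lam = 594e-9
nH, nL = 2.3, 1.46
pairs = 6
ns = [nH, nL] * pairs
ds = [lam / (4 * nH), lam / (4 * nL)] * pairs
```

With these assumed values the 6-pair stack already reflects roughly 99% at the design wavelength, consistent with the high index contrast argument above.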
Resumo:
The potential of visible-near-infrared spectra, obtained using a light backscatter sensor in conjunction with chemometrics, to predict curd moisture and whey fat content in a cheese vat was examined. A three-factor (renneting temperature, calcium chloride, cutting time) central composite design was carried out in triplicate. Spectra (300-1,100 nm) of the product in the cheese vat were captured during syneresis using a prototype light backscatter sensor. Stirring commenced upon cutting of the gel, and samples of curd and whey were removed at 10 min intervals and analyzed for curd moisture and whey fat content. The spectral data were used to develop models for predicting curd moisture and whey fat contents using partial least squares regression. Subjecting the spectral data set to jack-knifing improved the accuracy of the models. The whey fat models (R = 0.91, 0.95) and curd moisture models (R = 0.86, 0.89) provided good and approximate predictions, respectively. Visible-near-infrared spectroscopy was found to have potential for the prediction of important syneresis indices in stirred cheese vats.
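The partial least squares calibration step named above can be sketched with a minimal NIPALS PLS1 implementation mapping spectra to a single response such as curd moisture. This generic sketch, exercised on synthetic data in the test, is an illustration of the technique, not the study's actual chemometric pipeline:

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal NIPALS PLS1: returns (coefficients, x_mean, y_mean) for a
    linear model from spectra X (samples x wavelengths) to response y."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean, y_mean = X.mean(0), y.mean()
    Xr, yr = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr                    # weight vector (covariance direction)
        w = w / np.linalg.norm(w)
        t = Xr @ w                       # scores
        tt = float(t @ t)
        p = Xr.T @ t / tt                # X loadings
        q = float(yr @ t) / tt           # y loading
        Xr = Xr - np.outer(t, p)         # deflate
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    coef = W @ np.linalg.solve(P.T @ W, Q)
    return coef, x_mean, y_mean

def pls1_predict(X, coef, x_mean, y_mean):
    """Predict the response for new spectra with a fitted PLS1 model."""
    return (np.asarray(X, dtype=float) - x_mean) @ coef + y_mean
```

In practice the number of components would be chosen by cross-validation, and jack-knifing (as in the study) used to prune uninformative wavelengths.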
Resumo:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)