988 results for incoherent imaging system
Abstract:
The set of host- and pathogen-specific molecular features of a disease comprises its "signature". We hypothesize that biological signatures enable distinctions between vaccinated and infected individuals. In our research, using porcine samples, protocols were developed that could also be used to identify biological signatures of human disease. Different classes of molecular features will be tested during this project, including indicators of basic immune capacity, which are the ones currently under study. These indicators of basic immune response, such as porcine cytokines and antibodies, were validated using the enzyme-linked immunosorbent assay (ELISA), an established method that detects antigens by their interaction with a specific antibody coupled to a polystyrene substrate. Serum from naïve and vaccinated pigs was tested for the presence of cytokines. Using ELISA, we were able to differentiate normal porcine serum with added porcine IL-6 from serum without it. In addition, four different cytokines were spotted on a grating-coupled surface plasmon resonance imaging system (GCSPRI) chip and an antibody specific for IL-8 was run over the chip. Only IL-8 was detected; therefore, there was no cross-reactivity in this combination of antigens and antibodies. This system uses a multiplexed sensor chip to identify components of a sample run over it. Detection is accomplished through the change in refractive index caused by the interaction between the antibody spotted on the sensor chip and the antigen present in the sample. As the multiplexed GCSPRI is developed, we will need to optimize both sensitivity and specificity, minimizing the potential for cross-reactivity between individual analytes. The next step in this project is to increase the sensitivity of detection of the analytes. Currently, we are using two different antibodies (each recognizing a different part of the antigen) to amplify the signal produced by the interaction of an antibody with its cognate antigen. The development of this sensor chip would allow not only on-site detection of FMD virus, but also differentiation between infected and vaccinated individuals. Furthermore, other diseases could be diagnosed with increased accuracy and in less time thanks to the microarray approach.
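As a purely illustrative aside, the decision logic behind such a multiplexed readout can be sketched in a few lines; the spot names, shift values and the 3-sigma threshold below are hypothetical, not measurements from this work:

```python
import numpy as np

# Hypothetical resonance shifts (arbitrary units) for four spotted cytokines after
# running an anti-IL-8 antibody over the chip, plus buffer-only blank runs.
spots = {"IL-8": 0.052, "IL-6": 0.003, "TNF-alpha": 0.002, "IFN-gamma": 0.004}
blanks = np.array([0.002, 0.003, 0.001, 0.004, 0.002])

# Call a spot "detected" when its shift exceeds the blank mean by 3 standard deviations.
threshold = blanks.mean() + 3 * blanks.std()
detected = [name for name, shift in spots.items() if shift > threshold]
cross_reactive = [name for name in detected if name != "IL-8"]

print("detected:", detected)                  # ideally only IL-8
print("cross-reactivity:", cross_reactive or "none")
```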
Abstract:
Arctic permafrost may be adversely affected by climate change in a number of ways, so that establishing a world-wide monitoring program seems imperative. This thesis evaluates possibilities for permafrost monitoring using the example of a permafrost site on Svalbard, Norway. An energy balance model for permafrost temperatures is developed that evaluates the different components of the surface energy budget in analogy to climate models. The surface energy budget, consisting of the radiation components, the sensible and latent heat fluxes, and the ground heat flux, is measured over the course of one year, which had not previously been accomplished for arctic land areas. A considerable small-scale heterogeneity of the summer surface temperature is observed in long-term measurements with a thermal imaging system, and can be reproduced in the energy balance model. The model can also simulate the impact of different snow depths on the soil temperature, an effect documented in the field measurements. Furthermore, time series of terrestrial surface temperature measurements are compared to satellite-borne measurements, for which a significant cold bias is observed during winter. Finally, different possibilities for a world-wide monitoring scheme are assessed. Energy budget models can incorporate different satellite data sets as training data for parameter estimation, so that they may constitute an alternative to purely satellite-based schemes.
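For orientation, the surface energy budget that such a model evaluates is commonly written as the balance below; this is the standard textbook form with fluxes directed toward the surface counted as positive, and the thesis may use a different sign convention:

```latex
\[
  (1-\alpha)\,Q_{SW\downarrow} \;+\; Q_{LW\downarrow} \;-\; Q_{LW\uparrow}
  \;=\; Q_H \;+\; Q_E \;+\; Q_G
\]
```

where \(\alpha\) is the surface albedo, \(Q_{SW\downarrow}\), \(Q_{LW\downarrow}\) and \(Q_{LW\uparrow}\) are the short-wave and long-wave radiation components, and \(Q_H\), \(Q_E\) and \(Q_G\) are the sensible, latent and ground heat fluxes.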
Abstract:
Manual and low-tech well drilling techniques have the potential to assist in reaching the United Nations' Millennium Development Goal for water in sub-Saharan Africa. This study used publicly available geospatial data in a regression tree analysis to predict groundwater depth in the Zinder region of Niger, in order to identify areas suitable for manual well drilling. Regression trees were developed and tested on a database of 3681 wells in the Zinder region. A tree with 17 terminal leaves provided a range of groundwater depth estimates that were appropriate for manual drilling, though much of the tree's complexity was associated with depths beyond the reach of manual methods. A natural log transformation of groundwater depth was tested to see whether rescaling the dataset variance would result in finer distinctions for regions of shallow groundwater. The RMSE for a log-transformed tree with only 10 terminal leaves was almost half that of the untransformed 17-leaf tree for groundwater depths less than 10 m. This analysis indicated important groundwater relationships for commonly available maps of geology, soils, elevation, and the enhanced vegetation index from the MODIS satellite imaging system.
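A minimal sketch of this kind of analysis, written with scikit-learn rather than the software used in the study, and with hypothetical file and column names:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor

# Hypothetical file and column names; the real database holds 3681 wells.
wells = pd.read_csv("zinder_wells.csv")
X = wells[["geology", "soil_unit", "elevation", "evi"]]   # categorical maps assumed numerically encoded
y = wells["depth_m"]

# Untransformed tree pruned to 17 terminal leaves.
tree17 = DecisionTreeRegressor(max_leaf_nodes=17, random_state=0).fit(X, y)

# Log-transforming the target rescales the variance toward shallow depths.
tree10 = DecisionTreeRegressor(max_leaf_nodes=10, random_state=0).fit(X, np.log(y))

shallow = y < 10   # depths where manual drilling is plausible
rmse17 = mean_squared_error(y[shallow], tree17.predict(X[shallow])) ** 0.5
rmse10 = mean_squared_error(y[shallow], np.exp(tree10.predict(X[shallow]))) ** 0.5
print(f"RMSE (<10 m): 17-leaf tree {rmse17:.2f} m, log-transformed 10-leaf tree {rmse10:.2f} m")
```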
Abstract:
Independent measurements of the radiation, sensible and latent heat fluxes and the ground heat flux are used to describe the annual cycle of the surface energy budget at a high-arctic permafrost site on Svalbard. During summer, the net short-wave radiation is the dominant energy source, while well-developed turbulent processes and the heat flux in the ground lead to a cooling of the surface. About 15% of the net radiation is consumed by the seasonal thawing of the active layer in July and August. The Bowen ratio is found to vary between 0.25 and 2, depending on the water content of the uppermost soil layer. During the polar night in winter, the net long-wave radiation is the dominant energy loss channel for the surface, which is mainly compensated by the sensible heat flux and, to a lesser extent, by the ground heat flux, which originates from the refreezing of the active layer. The average annual sensible heat flux of -6.9 W/m² is composed of strong positive fluxes in July and August, while negative fluxes dominate during the rest of the year. At 6.8 W/m², the latent heat flux more or less compensates the sensible heat flux in the annual average. Strong evaporation occurs during the snow-melt period and particularly during the snow-free period in summer and fall. When the ground is covered by snow, latent heat fluxes through sublimation of snow are recorded, but they are insignificant for the average surface energy budget. The near-surface atmospheric stratification is found to be predominantly unstable to neutral when the ground is snow-free, and stable to neutral for snow-covered ground. Due to long-lasting near-surface inversions in winter, an average temperature difference of approximately 3 K exists between the air temperature at 10 m height and the surface temperature of the snow.
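For reference, the Bowen ratio quoted above is simply the ratio of the sensible to the latent heat flux at the surface:

```latex
\[
  \beta = \frac{Q_H}{Q_E}
\]
```

so values below 1 (wet uppermost soil) indicate that most of the available energy goes into evaporation, while values approaching 2 (dry soil) indicate that sensible heating dominates.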
Abstract:
The ground surface temperature is one of the key parameters that determine the thermal regime of permafrost soils in arctic regions. Due to the remoteness of most permafrost areas, monitoring of the land surface temperature (LST) through remote sensing is desirable. However, suitable satellite platforms such as MODIS provide spatial resolutions that cannot resolve the considerable small-scale heterogeneity of the surface conditions characteristic of many permafrost areas. This study investigates the spatial variability of summer surface temperatures of high-arctic tundra on Svalbard, Norway. A thermal imaging system mounted on a mast facilitates continuous monitoring of approximately 100 x 100 m of tundra with a wide variability of surface covers and soil moisture conditions over the entire summer season, from snow melt until fall. The net radiation is found to be a control parameter for the differences in surface temperature between wet and dry areas. Under clear-sky conditions in July, the differences in surface temperature between wet and dry areas reach up to 10 K. The spatial differences are strongly reduced in weekly averages of the surface temperature, which are relevant for the soil temperature evolution of deeper layers. Nevertheless, considerable variability remains, with maximum differences between wet and dry areas of 3 to 4 K. Furthermore, the pattern of snow patches and snow-free areas during snow melt in July causes even greater differences of more than 10 K in the weekly averages. Towards the end of the summer season, the differences in surface temperature gradually diminish. Due to the pronounced spatial variability in July, the accumulated degree-day totals of the snow-free period can differ by more than 60% throughout the study area. The terrestrial observations from the thermal imaging system are compared to measurements of the land surface temperature from the MODIS sensor. During periods with frequent clear-sky conditions, and thus a high density of satellite data, weekly averages calculated from the thermal imaging system and from MODIS LST agree to within less than 2 K. Larger deviations occur when prolonged cloudy periods prevent satellite measurements. Furthermore, the employed MODIS L2 LST data set contains a number of strongly biased measurements, which suggest an admixing of cloud-top temperatures. We conclude that a reliable gap-filling procedure to moderate the impact of prolonged cloudy periods would be of high value for a future LST-based permafrost monitoring scheme. The occurrence of sustained subpixel variability of the summer surface temperature is a complicating factor whose impact needs to be assessed further in conjunction with other spatially variable parameters such as the snow cover and soil properties.
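As an illustration of the two aggregations used in this comparison, a short sketch with pandas; the file and column names are placeholders, not the study's data:

```python
import pandas as pd

# Placeholder file/column names; the series holds surface temperatures in deg C.
ts = pd.read_csv("surface_temperature.csv", parse_dates=["time"], index_col="time")["t_surface"]

# Weekly averages, the quantity compared against MODIS LST.
weekly_mean = ts.resample("7D").mean()

# Accumulated degree-day total of the snow-free period:
# the sum of positive daily-mean temperatures.
daily_mean = ts.resample("1D").mean()
degree_days = daily_mean.clip(lower=0).sum()

print(weekly_mean.head())
print(f"accumulated degree-days: {degree_days:.1f} K d")
```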
Abstract:
Aims. We investigated the surface variegation of comet 67P/Churyumov-Gerasimenko, the detection of regions showing activity, the determination of active and inactive surface regions of the comet with spectral methods, and the detection of fallback material. Methods. We analyzed multispectral data generated from Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) narrow angle camera (NAC) observations via spectral techniques, reflectance ratios, and spectral slopes in order to study active regions. We applied clustering analysis to the results of the reflectance ratios, and introduced the new technique of activity thresholds to detect areas potentially enriched in volatiles. Results. Local color inhomogeneities are detected over the investigated surface regions. Active regions, such as Hapi, the active pits of Seth and Ma'at, the clustered and isolated bright features in Imhotep, the alcoves in Seth and Ma'at, and the large alcove in Anuket, have bluer spectra than the overall surface. The spectra generated with OSIRIS NAC observations are dominated by cometary emissions at around 700 nm to 750 nm as a result of the coma between the comet's surface and the camera. One of the two isolated bright features in the Imhotep region displays an absorption band at around 700 nm, which probably indicates the existence of hydrated silicates. An absorption band centered between 800 and 900 nm is tentatively observed in some regions of the nucleus surface. This absorption band can be explained by the crystal field absorption of Fe2+, which is a common spectral feature seen in silicates.
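A minimal per-pixel sketch of the two spectral quantities used here, reflectance ratios and spectral slopes; the wavelengths and the random cube below are placeholders rather than actual OSIRIS NAC filters or data:

```python
import numpy as np

# Placeholder filter centres (nm) and a stand-in for a calibrated reflectance cube.
wavelengths = np.array([480.0, 650.0, 880.0])
cube = np.random.rand(256, 256, 3)                 # (rows, cols, bands)

blue, nir = cube[..., 0], cube[..., 2]

# Reflectance ratio: lower NIR/blue values correspond to relatively "bluer" spectra.
ratio = nir / blue

# Spectral slope in %/(100 nm) between the shortest and longest wavelength.
slope = (nir - blue) / (blue * (wavelengths[2] - wavelengths[0])) * 1e4

# A crude activity-threshold-style mask: pixels bluer than the median slope.
bluer_mask = slope < np.nanmedian(slope)
```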
Abstract:
One important issue emerging strongly in agriculture is the automation of tasks, in which optical sensors play an important role. They provide images that must be conveniently processed. The most relevant image processing procedures require the identification of green plants (in our experiments they come from barley and corn crops, including weeds) so that some type of action can be carried out, including site-specific treatments with chemical products or mechanical manipulations. The identification of textures belonging to the soil could also be useful for estimating variables such as humidity or smoothness. Finally, from the point of view of autonomous robot navigation, where the robot is equipped with the imaging system, it is sometimes convenient to know not only the soil information and the plants growing in the soil, but also additional information supplied by global references based on specific areas. This implies that the images to be processed contain textures of three main types to be identified: green plants, soil, and sky, if present. This paper proposes a new automatic approach for segmenting these main textures and for refining the identification of sub-textures inside the main ones. Concerning the identification of greenness, we propose a new approach that exploits the performance of existing strategies by combining them, as sketched below. The combination takes into account the relevance of the information provided by each strategy based on the intensity variability; this is the first contribution. The combination of thresholding approaches for segmenting the soil and the sky is the second contribution; finally, the adjustment of a supervised fuzzy clustering approach for identifying sub-textures automatically constitutes the third contribution. The performance of the method allows us to verify its viability for automatic image-processing-based tasks in agriculture.
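A minimal sketch of the combined-greenness idea; the specific indices (ExG and ExGR) and the variability-based weighting rule below are illustrative assumptions, not the exact combination used in the paper:

```python
import numpy as np

def segment_green(rgb):
    """rgb: (rows, cols, 3) image; returns a boolean mask of green vegetation."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-9
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s   # chromaticities

    exg = 2 * g - r - b                 # excess green index
    exgr = exg - (1.4 * r - g)          # excess green minus excess red
    indices = [exg, exgr]

    # Weight each index by the intensity variability it exhibits on this image.
    weights = np.array([idx.std() for idx in indices])
    weights /= weights.sum()
    combined = sum(w * idx for w, idx in zip(weights, indices))

    return combined > 0                 # zero threshold; Otsu's method is an alternative
```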
Abstract:
Fresh-cut or minimally processed fruit and vegetables have been physically modified from their original form (by peeling, trimming, washing and cutting) to obtain a 100% edible product that is subsequently packaged (usually under modified atmosphere packaging, MAP) and kept in refrigerated storage. In fresh-cut products, physiological activity and microbiological spoilage determine their deterioration and shelf life. The major preservation techniques applied to delay spoilage are chilled storage and MAP, combined with chemical treatments (antimicrobial, antibrowning, acidulant and antioxidant solutions, etc.). The industry is looking for safer alternatives. Consequently, the sector is asking for innovative, fast, cheap and objective techniques to evaluate the overall quality and safety of fresh-cut products, in order to obtain decision tools for implementing new packaging materials and procedures. In recent years, hyperspectral imaging has been regarded as a tool for quality evaluation of food products in research, control and industry. A hyperspectral imaging system integrates spectroscopic and imaging techniques to enable direct identification of different components or quality characteristics and their spatial distribution in the tested sample. The objective of this work is to develop hyperspectral image processing methods for supervising, through plastic films, changes related to quality deterioration in packed ready-to-use leafy vegetables during shelf life. The evolution of ready-to-use spinach and watercress samples covered with three different common transparent plastic films was studied. Samples were stored at 4 ºC during the monitoring period (up to 21 days). More than 60 hyperspectral images (from 400 to 1000 nm) per species were analyzed using ad hoc routines and commercial toolboxes of MatLab®. Besides common spectral treatments for removing additive and multiplicative effects, an additional correction, applied before any other, was performed on the images of the leaves in order to avoid the modification of their spectra caused by the presence of the transparent plastic film. Findings from this study suggest that the developed image analysis system is able to deal with the effects caused in the images by the presence of plastic films when supervising the shelf life of leafy vegetables, for which different quality stages have been identified.
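A brief sketch of one plausible version of this pre-processing chain; treating the film correction as a division by a reference spectrum acquired through the same film over a white target, followed by a standard normal variate (SNV) transform, is our assumption, not necessarily the exact scheme used in the work:

```python
import numpy as np

def correct_pixel_spectra(cube, film_reference):
    """cube: (rows, cols, bands) reflectance; film_reference: (bands,) spectrum through the film."""
    corrected = cube / film_reference                  # remove the plastic-film signature first
    mean = corrected.mean(axis=2, keepdims=True)
    std = corrected.std(axis=2, keepdims=True) + 1e-9
    return (corrected - mean) / std                    # SNV: remove additive/multiplicative effects
```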
Abstract:
We demonstrate the capability of a laser micromachining workstation for cost-effective manufacturing of a variety of microfluidic devices, including SU-8 microchannels on silicon wafers and complex 3D structures made of polyimide Kapton® or polycarbonate (PC). The workstation combines a KrF excimer laser at 248 nm and a frequency-tripled Nd3+:YVO4 DPSS laser at 355 nm with 10X lens magnification, both lasers working in a pulsed regime with nanosecond (ns) pulse durations. The workstation also includes a high-resolution motorized XYZ-tilt stage (~1 µm per axis) and a through-the-lens (TTL) imaging system for highly accurate positioning over a 120 x 120 mm working area. We have surveyed different fabrication techniques: direct-write lithography, mask manufacturing for contact lithography, and polymer laser ablation for complex 3D devices, achieving channel widths down to 13 µm in 50 µm-thick SU-8 using direct-write lithography, and channel widths of 40 µm for polyimide on a SiO2 plate. Finally, we have tested the use of some devices as capillary chips, measuring the flow speed for liquids with different viscosities. As a result, we have characterized the presence of liquid in the channel by interferometric microscopy.
Abstract:
This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre-planning and intraoperative planning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and the aim of this work was to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package widely used in medical physics applications. Once the accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.
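An illustrative sketch of the grid-to-grid check such a validation relies on, written as a simple dose-difference pass rate; the 3% tolerance and 10% low-dose cut-off below are arbitrary values chosen for the example, not the acceptance criteria used in the work:

```python
import numpy as np

def dose_difference_pass_rate(dose_fast, dose_ref, tolerance=0.03, low_dose_cutoff=0.1):
    """Fraction of significant voxels where the fast MC dose agrees with the reference."""
    dose_fast = np.asarray(dose_fast, dtype=float)
    dose_ref = np.asarray(dose_ref, dtype=float)
    diff = np.abs(dose_fast - dose_ref) / dose_ref.max()   # difference normalised to the reference maximum
    mask = dose_ref > low_dose_cutoff * dose_ref.max()     # ignore very low-dose regions
    return (diff[mask] <= tolerance).mean()

# Example usage with two dose grids of equal shape:
# pass_rate = dose_difference_pass_rate(fast_mc_grid, reference_grid)
```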
Abstract:
Ultrasound imaging systems are today an indispensable tool in medical diagnostic applications and are increasingly used in industrial applications in the area of non-destructive testing. The array is the primary element of these systems, and its design determines the characteristics of the beams that can be formed (shape and size of the main lobe, of the secondary and grating lobes, etc.), conditioning the quality of the images that can be achieved. In regular arrays, the maximum distance between elements is set to half a wavelength to avoid the formation of artefacts. At the same time, the image resolution of the objects present in the scene increases with the total size of the aperture, so that a small improvement in image quality translates into a significant increase in the number of transducer elements. This has, among others, the following consequences: manufacturing problems for the arrays due to the high connection density (note that in typical medical imaging applications the wavelength is a few tenths of a millimetre); low signal-to-noise ratio and, consequently, low dynamic range of the signals because of the reduced size of the elements; and complexity of the equipment, which must handle a large number of independent channels. For example, 10,000 elements separated by λ/2 would be needed for a square aperture of 50λ. As a simple way of addressing these problems, there are alternatives that reduce the number of active elements of a full array, sacrificing to some extent the image quality, the emitted energy, the dynamic range, the contrast, etc. We propose a different strategy: to develop an optimization methodology capable of systematically finding ultrasound array configurations adapted to specific applications. To carry out this task we propose the use of evolutionary algorithms to search the space of array configurations and select those that best meet the requirements set by each application. The thesis addresses the problem of encoding array configurations so that they can be used as individuals of the population on which the evolutionary algorithms operate. It also addresses the definition of fitness functions that allow such configurations to be compared according to the requirements and restrictions of each design problem. Finally, it proposes using the multi-objective algorithm NSGA-II as the primary optimization tool and then using single-objective algorithms of the Simulated Annealing type to select and refine the solutions provided by NSGA-II. Many of the fitness functions that define the desired characteristics of the array to be designed are computed from one or more radiation patterns generated by each candidate solution. Obtaining these patterns with the usual broadband acoustic field simulation methods requires very long computation times, which can make the optimization process with evolutionary algorithms unfeasible in practice.
As a solution, a narrow-band calculation method is proposed that reduces the required computation time by at least an order of magnitude. Finally, a series of examples with linear and two-dimensional arrays is presented to validate the proposed design methodology, experimentally comparing the actual characteristics of the manufactured designs with the predictions of the optimization method.
ABSTRACT Currently, the ultrasound imaging system is one of the most powerful tools in medical diagnostics and in non-destructive testing for industrial applications. The ultrasonic array design determines the beam characteristics (main and secondary lobes, beam pattern, etc.), which in turn determine the achievable image resolution. The maximum distance between the elements of the array should be half the wavelength to avoid the formation of grating lobes. At the same time, the image resolution of the target in the region of interest increases with the aperture size. Consequently, a larger number of elements in the array ensures better image quality, but this improvement comes with the following drawbacks: difficulties in array manufacturing due to the high connection density; low signal-to-noise ratio; and complexity of the ultrasonic system needed to handle a large number of channels. The easiest way to resolve these issues is to reduce the number of active elements in full arrays, but the image quality, dynamic range, contrast, etc., are compromised by such solutions. In this thesis, an optimization methodology able to find ultrasound array configurations adapted to specific applications is presented. Evolutionary algorithms are used to obtain the best arrays among the existing configurations. This work addresses problems such as the codification of ultrasound arrays so that they can be interpreted as individuals in the evolutionary algorithm population, and the fitness functions and constraints that assess the behaviour of the individuals. It is proposed to use the multi-objective algorithm NSGA-II as the primary optimization tool, and then to use the single-objective Simulated Annealing algorithm to select and refine the solutions provided by NSGA-II. The acoustic field is calculated many times for each individual, in every generation and for every fitness function; to reduce the otherwise prohibitive computing time, we employ an acoustic narrow-band field simulator in which the number of operations is reduced, ensuring a quick calculation of the acoustic field. Finally, a set of examples with linear and bidimensional arrays is presented in order to validate the proposed design methodology, comparing the actual characteristics of the designs with the predictions of the optimization methodology.
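As a rough illustration of the narrow-band field evaluation that keeps such fitness functions affordable, the sketch below computes the single-frequency far-field pattern (array factor) of a candidate sparse linear array; the element count, pitch and random layout are illustrative choices, not configurations from the thesis:

```python
import numpy as np

def array_factor_db(active, pitch, wavelength, angles_deg):
    """Normalised far-field pattern (dB) of a linear array; active is a boolean element mask."""
    k = 2 * np.pi / wavelength
    x = np.arange(active.size) * pitch                   # element positions (same units as wavelength)
    theta = np.radians(angles_deg)
    phase = np.outer(np.sin(theta), k * x[active])       # plane-wave phase of each active element
    pattern = np.abs(np.exp(1j * phase).sum(axis=1))
    return 20 * np.log10(pattern / pattern.max() + 1e-12)

angles = np.linspace(-90, 90, 721)
full = np.ones(64, dtype=bool)                           # full 64-element, lambda/2-pitch array
sparse = np.random.default_rng(0).random(64) < 0.5       # one random candidate sparse layout
af_full = array_factor_db(full, 0.5, 1.0, angles)
af_sparse = array_factor_db(sparse, 0.5, 1.0, angles)
# Fitness terms such as main-lobe width or peak side-lobe level are extracted from these patterns.
```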
Abstract:
Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but also sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain-stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production is always accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to those of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.
Abstract:
"Snapshot" images of localized Ca2+ influx into patch-clamped chromaffin cells were captured using a recently developed pulsed-laser imaging system. Transient opening of voltage-sensitive Ca2+ channels gave rise to localized elevations of Ca2+ that had the appearance of either "hotspots" or partial rings found immediately beneath the plasma membrane. When the Ca2+ imaging technique was employed in conjunction with flame-etched carbon-fiber electrodes to spatially map the release sites of catecholamines, it was observed that the sites of Ca2+ entry and catecholamine release were colocalized. These results provide functional support for the idea that secretion occurs from "active zone"-like structures in neuroendocrine cells.
Abstract:
LIDAR (LIght Detection And Ranging) first return elevation data of the Boston, Massachusetts region from MassGIS at 1-meter resolution. This LIDAR data was captured in Spring 2002. LIDAR first return data (which shows the highest ground features, e.g. tree canopy, buildings, etc.) can be used to produce a digital terrain model of the Earth's surface. This dataset consists of 74 First Return DEM tiles. The tiles are 4 km by 4 km areas corresponding with the MassGIS orthoimage index. This data set was collected using 3Di's Digital Airborne Topographic Imaging System II (DATIS II). The area of coverage corresponds to the following MassGIS orthophoto quads covering the Boston region (MassGIS orthophoto quad ID: 229890, 229894, 229898, 229902, 233886, 233890, 233894, 233898, 233902, 233906, 233910, 237890, 237894, 237898, 237902, 237906, 237910, 241890, 241894, 241898, 241902, 245898, 245902). The geographic extent of this dataset is the same as that of the MassGIS dataset: Boston, Massachusetts Region 1:5,000 Color Ortho Imagery (1/2-meter Resolution), 2001, and it was used to produce the MassGIS dataset: Boston, Massachusetts, 2-Dimensional Building Footprints with Roof Height Data (from LIDAR data), 2002 [see cross references].
Abstract:
This dataset consists of 2D footprints of the buildings in the metropolitan Boston area, based on tiles in the orthoimage index (orthophoto quad ID: 229890, 229894, 229898, 229902, 233886, 233890, 233894, 233898, 233902, 237890, 237894, 237898, 237902, 241890, 241894, 241898, 241902, 245898, 245902). This data set was collected using 3Di's Digital Airborne Topographic Imaging System II (DATIS II). Roof height and footprint elevation attributes (derived from 1-meter resolution LIDAR (LIght Detection And Ranging) data) are included as part of each building feature. This data can be combined with other datasets to create 3D representations of buildings and the surrounding environment.