925 results for OPTICAL PERFORMANCE MONITORING


Relevance: 30.00%

Publisher:

Abstract:

Optical coherence tomography (OCT) is a noninvasive three-dimensional interferometric imaging technique capable of achieving micrometer scale resolution. It is now a standard of care in ophthalmology, where it is used to improve the accuracy of early diagnosis, to better understand the source of pathophysiology, and to monitor disease progression and response to therapy. In particular, retinal imaging has been the most prevalent clinical application of OCT, but researchers and companies alike are developing OCT systems for cardiology, dermatology, dentistry, and many other medical and industrial applications.

Adaptive optics (AO) is a technique used to reduce monochromatic aberrations in optical instruments. It is used in astronomical telescopes, laser communications, high-power lasers, retinal imaging, optical fabrication and microscopy to improve system performance. Scanning laser ophthalmoscopy (SLO) is a noninvasive confocal imaging technique that produces high contrast two-dimensional retinal images. AO is combined with SLO (AOSLO) to compensate for the wavefront distortions caused by the optics of the eye, providing the ability to visualize the living retina with cellular resolution. AOSLO has shown great promise to advance the understanding of the etiology of retinal diseases on a cellular level.

Broadly, we endeavor to enhance the vision outcome of ophthalmic patients through improved diagnostics and personalized therapy. Toward this end, the objective of the work presented herein was the development of advanced techniques for increasing the imaging speed, reducing the form factor, and broadening the versatility of OCT and AOSLO. Despite our focus on applications in ophthalmology, the techniques developed could be applied to other medical and industrial applications. In this dissertation, a technique to quadruple the imaging speed of OCT was developed. This technique was demonstrated by imaging the retinas of healthy human subjects. A handheld, dual depth OCT system was developed. This system enabled sequential imaging of the anterior segment and retina of human eyes. Finally, handheld SLO/OCT systems were developed, culminating in the design of a handheld AOSLO system. This system has the potential to provide cellular level imaging of the human retina, resolving even the most densely packed foveal cones.

X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus our responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.

With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
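The size dependence of the organ dose coefficients can be illustrated with a minimal sketch, assuming the commonly reported exponential relationship between CTDIvol-normalized organ dose and patient size; the fit parameters a and b below are illustrative placeholders, not values from the thesis:

```python
import math

def organ_dose(ctdi_vol_mGy, water_eq_diameter_cm, a, b):
    """Organ dose (mGy) estimated as CTDIvol times an exponential
    size-dependent coefficient h = exp(a - b * d); a and b are
    organ- and scanner-specific fit parameters."""
    h = math.exp(a - b * water_eq_diameter_cm)
    return ctdi_vol_mGy * h

# Illustrative: an abdominal organ for a 28 cm patient at CTDIvol = 10 mGy
dose = organ_dose(10.0, 28.0, a=0.8, b=0.035)
```

Because the coefficient decays with diameter, larger patients receive a lower organ dose for the same CTDIvol, which is why the coefficients must be tabulated across a representative phantom library.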

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on commercial software (DoseWatch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.

With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
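The image-based noise addition step in (1) can be sketched under the standard assumption that quantum noise scales inversely with the square root of dose; the function and parameter values below are illustrative, not the thesis implementation:

```python
import numpy as np

def simulate_low_dose(image_hu, sigma_full, dose_fraction, rng=None):
    """Simulate a reduced-dose CT image by adding zero-mean Gaussian
    noise. Quantum noise is assumed to scale as 1/sqrt(dose), so the
    noise to add in quadrature is sigma_full * sqrt(1/f - 1)."""
    rng = rng or np.random.default_rng(0)
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, size=image_hu.shape)

# Quarter-dose simulation for a uniform region with 10 HU noise at full
# dose: the added noise has sigma = 10 * sqrt(3) ≈ 17.3 HU, bringing the
# total to 10 / sqrt(0.25) = 20 HU.
img = np.zeros((256, 256))
low = simulate_low_dose(img, sigma_full=10.0, dose_fraction=0.25)
```

A clinical implementation would additionally shape the added noise with the scanner's noise power spectrum and account for tube current modulation, both omitted here.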

Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.

The effects of vehicle speed on Structural Health Monitoring (SHM) of bridges under operational conditions are studied in this paper. The moving vehicle is modelled as a single-degree-of-freedom oscillator traversing a damaged beam at a constant speed. The bridge is modelled as a simply supported Euler-Bernoulli beam with a breathing crack. The breathing crack is treated as a nonlinear system with bilinear stiffness characteristics related to the opening and closing of the crack. The unevenness of the bridge deck is modelled using road classification according to ISO 8608:1995(E). The stochastic description of the unevenness of the road surface is used as an aid to monitor the health of the structure in its operational condition. Numerical simulations are conducted considering the effects of changing vehicle speed with regard to cumulant-based statistical damage detection parameters. The detection and calibration of damage at different levels is based on an algorithm dependent on the responses of the damaged beam to passages of the load. Possibilities of damage detection and calibration under benchmarked and non-benchmarked cases are considered, and the sensitivity of calibration values is studied. The findings of this paper are important for establishing what can be expected from different vehicle speeds when using bridge-vehicle interaction for damage detection, where the bridge does not need to be closed for monitoring. The identification of bunching of these speed ranges provides guidelines for using the methodology developed in the paper.
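The bilinear breathing-crack idea can be sketched as a single-degree-of-freedom free-vibration simulation with one stiffness while the crack is closed and a reduced stiffness while it is open; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def breathing_crack_response(x0=0.01, k_open=0.8e6, k_closed=1.0e6,
                             m=1000.0, c=200.0, dt=1e-4, n=20000):
    """Free vibration of a single-degree-of-freedom oscillator with
    bilinear stiffness: full stiffness while the crack is closed
    (x <= 0 here) and reduced stiffness while it is open (x > 0)."""
    x, v = x0, 0.0
    out = np.empty(n)
    for i in range(n):
        k = k_open if x > 0 else k_closed   # breathing crack
        a = -(c * v + k * x) / m            # semi-implicit Euler step
        v += a * dt
        x += v * dt
        out[i] = x
    return out

resp = breathing_crack_response()
```

The stiffness asymmetry makes the response non-Gaussian and introduces higher harmonics, which is what cumulant-based statistical damage detection parameters exploit.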

The absence of a rapid, low-cost and highly sensitive biodetection platform has hindered the implementation of next-generation cheap, early-stage clinical or home-based point-of-care diagnostics. Label-free optical biosensing with high sensitivity, throughput, compactness, and low cost plays an important role in resolving these diagnostic challenges and pushes the detection limit down to the single-molecule level. Optical nanostructures, specifically resonant waveguide grating (RWG) and nano-ribbon-cavity-based biodetection, are promising in this context. The main element of this dissertation is the design, fabrication and characterization of RWG sensors for different spectral regions (e.g. visible, near-infrared) for use in label-free optical biosensing, and the exploration of different RWG parameters to maximize sensitivity and increase detection accuracy. Design and fabrication of the waveguide-embedded resonant nano-cavity are also studied. Multi-parametric analyses were performed using a customized optical simulator to understand the operational principle of these sensors and, more importantly, the relationship between the physical design parameters and sensor sensitivities. Silicon nitride (SixNy) is a useful waveguide material because of its wide transparency across the whole infrared, visible and part of the UV spectrum, and its comparatively higher refractive index than the glass substrate. SixNy-based RWGs on glass substrates are designed and fabricated applying both electron beam lithography and low-cost nano-imprint lithography techniques. A chromium-hard-mask-aided nano-fabrication technique is developed for making very high aspect ratio optical nano-structures on glass substrates. An aspect ratio of 10 for very narrow (~60 nm wide) grating lines is achieved, the highest presented so far.
The fabricated RWG sensors are characterized for both bulk (183.3 nm/RIU) and surface sensitivity (0.21 nm/nm-layer), and then used for successful detection of Immunoglobulin-G (IgG) antibodies and antigen (~1 μg/ml) both in buffer and serum. Widely used optical biosensors such as surface plasmon resonance and optical microcavities are limited in their ability to separate the bulk response from surface binding events, which is crucial for ultralow-concentration biosensing in the presence of thermal or other perturbations. An RWG-based dual-resonance approach is proposed and verified by controlled experiments for separating the bulk and surface responses. The dual-resonance approach gives a sensitivity ratio of 9.4, whereas the competing polarization-based approach offers only 2.5. The improved performance of the dual-resonance approach would help reduce the probability of false readings in precise bio-assay experiments where thermal variations are probable, such as in portable diagnostics.
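The dual-resonance separation amounts to solving a 2 × 2 linear system linking the two measured resonance shifts to the bulk and surface contributions. In the sketch below, only the 183.3 nm/RIU and 0.21 nm/nm-layer figures come from the text; the second mode's sensitivities and the measured shifts are hypothetical:

```python
import numpy as np

# Measured shifts (nm) of the two RWG resonances -- hypothetical values
d_lambda = np.array([0.50, 0.12])

# Sensitivity matrix: rows = resonances, columns = (bulk per RIU,
# surface per nm-layer). The first row uses the sensitivities reported
# for the fabricated sensor; the second is an illustrative mode with a
# different bulk/surface ratio, which keeps the system well conditioned.
S = np.array([[183.3, 0.21],
              [ 19.5, 0.18]])

# Solve  d_lambda = S @ [d_n_bulk, d_t_surface]
d_n_bulk, d_t_surface = np.linalg.solve(S, d_lambda)
```

The larger the difference in bulk/surface sensitivity ratio between the two modes, the better conditioned the inversion, which is the rationale for the sensitivity-ratio figure of merit quoted above.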

The use of structural health monitoring of civil structures is ever expanding: by assessing the dynamical condition of structures, informed maintenance management can be conducted at both individual and network levels. With the continued growth of information-age technology, the potential arises for smart monitoring systems to be integrated with civil infrastructure to provide efficient information on the condition of a structure. The focus of this thesis is the integration of smart technology with civil infrastructure for the purposes of structural health monitoring. The technologies considered in this regard are devices based on energy harvesting materials. While there has been considerable focus on the development and optimisation of such devices under steady-state loading conditions, their applications for civil infrastructure are less well known. Although research is still at an initial stage, studies into such applications are very promising. Using the dynamical response of structures to a variety of loading conditions, the energy harvesting outputs from such devices are established and the potential power output determined. Through a power variance output approach, damage detection of deteriorating structures using the energy harvesting devices is investigated. Further applications of the integration of energy harvesting devices with civil infrastructure investigated by this research include the use of the power output as an indicator for control. Four approaches are undertaken to determine the potential applications arising from integrating smart technology with civil infrastructure, namely:
• Theoretical analysis to determine the applications of energy harvesting devices for vibration-based health monitoring of civil infrastructure.
• Laboratory experimentation to verify the performance of different energy harvesting configurations for civil infrastructure applications.
• Scaled model testing as a method to experimentally validate the integration of the energy harvesting devices with civil infrastructure.
• Full-scale deployment of an energy harvesting device on a bridge structure.
These four approaches validate the application of energy harvesting technology with civil infrastructure from a theoretical, experimental and practical perspective.
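A minimal sketch of the power variance output idea, under the simplifying assumption that harvested power is proportional to the squared structural response (the thesis works with full electromechanical harvester models, so both functions below are illustrative):

```python
import numpy as np

def harvested_power(response, k=1.0):
    """Instantaneous harvested power, taken as proportional to the
    squared structural response (a deliberate simplification)."""
    return k * response ** 2

def power_variance_index(healthy, monitored):
    """Ratio of harvested-power variance in the monitored state to a
    healthy baseline; deviation from 1 flags a change in dynamics."""
    return np.var(harvested_power(monitored)) / np.var(harvested_power(healthy))

t = np.linspace(0.0, 10.0, 2001)
baseline = np.sin(2 * np.pi * t)
# e.g. a 10% larger response amplitude after deterioration
idx = power_variance_index(baseline, 1.1 * baseline)   # (1.1)**4 ≈ 1.46
```

Because power goes as the square of the response, the variance of power scales with the fourth power of amplitude, making it a sensitive indicator of changes in the dynamic response.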

The amount and quality of available biomass is a key factor for the sustainable livestock industry and for agricultural management-related decision making. Globally, 31.5% of land cover is grassland, while 80% of Ireland’s agricultural land is grassland. In Ireland, grasslands are intensively managed and provide the cheapest feed source for animals. This dissertation presents a detailed state-of-the-art review of satellite remote sensing of grasslands, and the potential application of optical (Moderate-resolution Imaging Spectroradiometer (MODIS)) and radar (TerraSAR-X) time series imagery to estimate grassland biomass at two study sites (Moorepark and Grange) in the Republic of Ireland using both statistical and state-of-the-art machine learning algorithms. High quality weather data available from the on-site weather station were also used to calculate the Growing Degree Days (GDD) for Grange to determine the impact of ancillary data on biomass estimation. In situ and satellite data covering 12 years for the Moorepark and 6 years for the Grange study sites were used to predict grassland biomass using multiple linear regression and Adaptive Neuro-Fuzzy Inference System (ANFIS) models. The results demonstrate that a dense (8-day composite) MODIS image time series, along with high quality in situ data, can be used to retrieve grassland biomass with high performance (R2 = 0.86, p < 0.05, RMSE = 11.07 for Moorepark). The model for Grange was modified to evaluate the synergistic use of vegetation indices derived from remote sensing time series and accumulated GDD information. As GDD is strongly linked to plant development, or phenological stage, an improvement in biomass estimation would be expected. It was observed that using the ANFIS model the biomass estimation accuracy increased from R2 = 0.76 (p < 0.05) to R2 = 0.81 (p < 0.05) and the root mean square error was reduced by 2.72%.
The work on the application of optical remote sensing was further developed using a TerraSAR-X Staring Spotlight mode time series over the Moorepark study site to explore the extent to which very high resolution Synthetic Aperture Radar (SAR) data of interferometrically coherent paddocks can be exploited to retrieve grassland biophysical parameters. After filtering out the non-coherent plots, it is demonstrated that interferometric coherence can be used to retrieve grassland biophysical parameters (i.e., height, biomass), and that it is possible to detect changes due to grass growth, and grazing and mowing events, when the temporal baseline is short (11 days). However, it is not possible to automatically and uniquely identify the cause of these changes based only on the SAR backscatter and coherence, due to the ambiguity caused by tall grass laid down by the wind. Overall, the work presented in this dissertation has demonstrated the potential of dense remote sensing and weather data time series to predict grassland biomass using machine-learning algorithms, where high quality ground data were used for training. At present, a major limitation for national-scale biomass retrieval is the lack of spatial and temporal ground samples, which can be partially resolved by minor modifications to the existing PastureBaseIreland database, adding the location and extent of each grassland paddock to the database. As far as remote sensing data requirements are concerned, MODIS is useful for large-scale evaluation, but due to its coarse resolution it is not possible to detect the variations within and between fields at the farm scale. However, this issue will be resolved in terms of spatial resolution by the Sentinel-2 mission, and when both satellites (Sentinel-2A and Sentinel-2B) are operational the revisit time will reduce to 5 days, which, together with Landsat-8, should provide sufficient cloud-free data for operational biomass estimation at a national scale.
The Synthetic Aperture Radar Interferometry (InSAR) approach is feasible if there are enough coherent interferometric pairs available; however, this is difficult to achieve due to the temporal decorrelation of the signal. For repeat-pass InSAR over a vegetated area, even an 11-day temporal baseline is too long. In order to achieve better coherence, a very high resolution is required at the cost of spatial coverage, which limits its scope for use in an operational context at a national scale. Future InSAR missions with pair acquisition in Tandem mode will minimize the temporal decorrelation over vegetated areas for more focused studies. The proposed approach complements the current paradigm of Big Data in Earth Observation, and illustrates the feasibility of integrating data from multiple sources. In future, this framework can be used to build an operational decision support system for retrieval of grassland biophysical parameters based on data from long-term planned optical missions (e.g., Landsat, Sentinel) that will ensure the continuity of data acquisition. Similarly, the Spanish X-band PAZ and TerraSAR-X2 missions will ensure the continuity of TerraSAR-X and COSMO-SkyMed.
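The accumulated GDD used as an ancillary model input can be computed from daily minimum and maximum air temperatures. A minimal sketch, assuming the standard averaging method; the base temperature of 0 °C is a common choice for temperate grassland and may differ from the value used in the thesis:

```python
def growing_degree_days(t_min_c, t_max_c, t_base_c=0.0):
    """Accumulated Growing Degree Days from daily min/max air
    temperatures (°C): sum of max(0, (Tmin + Tmax)/2 - Tbase)."""
    return sum(max(0.0, (lo + hi) / 2.0 - t_base_c)
               for lo, hi in zip(t_min_c, t_max_c))

# Three illustrative days of station data
gdd = growing_degree_days([2.0, 4.0, -1.0], [10.0, 12.0, 5.0])  # → 16.0
```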

Sensors for real-time monitoring of environmental contaminants are essential for protecting ecosystems and human health. Refractive index sensing is a non-selective technique that can be used to measure almost any analyte. Miniaturized refractive index sensors, such as silicon-on-insulator (SOI) microring resonators, are one possible platform, but require coatings selective to the analytes of interest. A homemade prism refractometer is reported and used to characterize the interactions between polymer films and liquid- or vapour-phase analytes. A camera was used to capture both Fresnel reflection and total internal reflection within the prism. For thin films (d = 10–100 μm), interference fringes were also observed. Fourier analysis of the interferogram allowed simultaneous extraction of the average refractive index and film thickness with accuracies of ∆n = 1–7 × 10⁻⁴ and ∆d within 3–5%. The refractive indices of 29 common organic solvents as well as aqueous solutions of sodium chloride, sucrose, ethylene glycol, glycerol, and dimethylsulfoxide were measured at λ = 1550 nm. These measurements will be useful for future calibrations of near-infrared refractive index sensors. A mathematical model is presented in which the concentration of analyte absorbed in a film can be calculated from the refractive index and thickness changes during uptake. This model can be used with Fickian diffusion models to measure the diffusion coefficients through the bulk film and at the film-substrate interface. The diffusion of water and other organic solvents into SU-8 epoxy was explored using refractometry, and the diffusion coefficient of water into SU-8 is presented. Exposure of soft-baked SU-8 films to acetone, acetonitrile and methanol resulted in rapid delamination. The diffusion of volatile organic compound (VOC) vapours into polydimethylsiloxane and polydimethyl-co-diphenylsiloxane polymers was also studied using refractometry.
Diffusion and partition coefficients are reported for several analytes. As a model system, polydimethyl-co-diphenylsiloxane films were coated onto SOI microring resonators. After the development of data acquisition software, coated devices were exposed to VOCs and the refractive index response was assessed. More studies with other polymers are required to test the viability of this platform for environmental sensing applications.
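The Fourier extraction of film parameters rests on the fact that thin-film fringes are periodic in wavenumber with period 1/(2nd), so an FFT along the wavenumber axis peaks at the optical thickness 2nd. A minimal simulated example (all numbers illustrative, not the dissertation's calibration):

```python
import numpy as np

n_film, d_um = 1.58, 50.0             # film index and thickness (µm)
nu = np.linspace(0.5, 1.0, 4096)      # wavenumber 1/λ in 1/µm
fringes = np.cos(2 * np.pi * (2 * n_film * d_um) * nu)  # period 1/(2nd)

spectrum = np.abs(np.fft.rfft(fringes - fringes.mean()))
freqs = np.fft.rfftfreq(nu.size, d=nu[1] - nu[0])  # conjugate of wavenumber
opt_thickness = freqs[spectrum.argmax()]           # ≈ 2 * n_film * d_um
```

Recovering n and d separately requires a second relationship, e.g. the fringe envelope or an independent thickness measurement, which is where the simultaneous-extraction analysis comes in.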

To evaluate the performance of ocean-colour retrievals of total chlorophyll-a concentration requires direct comparison with concomitant and co-located in situ data. For global comparisons, these in situ match-ups should ideally be representative of the distribution of total chlorophyll-a concentration in the global ocean. The oligotrophic gyres constitute the majority of oceanic water, yet are under-sampled due to their inaccessibility and under-represented in global in situ databases. The Atlantic Meridional Transect (AMT) is one of only a few programmes that consistently sample oligotrophic waters. In this paper, we used a spectrophotometer on two AMT cruises (AMT19 and AMT22) to continuously measure absorption by particles in the water of the ship's flow-through system. From these optical data, continuous total chlorophyll-a concentrations were estimated with high precision and accuracy along each cruise and used to evaluate the performance of ocean-colour algorithms. We conducted the evaluation using level 3 binned ocean-colour products, and used the high spatial and temporal resolution of the underway system to maximise the number of match-ups on each cruise. Statistical comparisons show a significant improvement in the performance of satellite chlorophyll algorithms over previous studies, with root mean square errors on average less than half (~ 0.16 in log10 space) that reported previously using global datasets (~ 0.34 in log10 space). This improved performance is likely due to the use of continuous absorption-based chlorophyll estimates, which are highly accurate, sample spatial scales more comparable with satellite pixels, and minimise human errors. Previous comparisons might have reported higher errors due to regional biases in datasets and methodological inconsistencies between investigators.
Furthermore, our comparison showed an underestimate in satellite chlorophyll at low concentrations in 2012 (AMT22), likely due to a small bias in satellite remote-sensing reflectance data. Our results highlight the benefits of using underway spectrophotometric systems for evaluating satellite ocean-colour data and underline the importance of maintaining in situ observatories that sample the oligotrophic gyres.
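The match-up statistic quoted above, root mean square error in log10 space, can be computed directly; the match-up values in the example are hypothetical:

```python
import numpy as np

def log10_rmse(chl_satellite, chl_in_situ):
    """Root mean square error in log10 space for chlorophyll match-ups,
    the conventional metric because chlorophyll is log-normally
    distributed in the ocean."""
    d = np.log10(chl_satellite) - np.log10(chl_in_situ)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical oligotrophic match-ups (mg m^-3)
err = log10_rmse([0.060, 0.110, 0.190], [0.050, 0.100, 0.200])
```

An error of 0.16 in log10 space corresponds to satellite values typically within a factor of about 1.4 of the in situ estimate, versus a factor of about 2.2 for the 0.34 reported from global datasets.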


Executive functions (EF) such as self-monitoring, planning, and organizing are known to develop through childhood and adolescence. They are of potential importance for learning and school performance. Earlier research into the relation between EF and school performance did not provide clear results, possibly because confounding factors such as educational track, boy-girl differences, and parental education were not taken into account. The present study therefore investigated the relation between executive function tests and school performance in a highly controlled sample of 173 healthy adolescents aged 12–18. Only students in the pre-university educational track were included, and the performance of boys was compared to that of girls. Results showed that there was no relation between the report marks obtained and performance on executive function tests, notably the Sorting Test and the Tower Test of the Delis-Kaplan Executive Function System (D-KEFS). Likewise, no relation was found between report marks and scores on the Behavior Rating Inventory of Executive Function—Self-Report Version (BRIEF-SR) after these were controlled for grade, sex, and level of parental education. The findings indicate that executive functioning as measured with widely used instruments such as the BRIEF-SR does not predict the school performance of adolescents in pre-university education any better than a student's grade, sex, and level of parental education.

Dissolved CO2 measurements are usually made using a Severinghaus electrode, which is bulky and can suffer from electrical interference. In contrast, optical CO2 sensors, whilst not suffering these problems, are mainly used for making gaseous (not dissolved) CO2 measurements, due to dye leaching and protonation, especially at high ionic strength (>0.01 M) and acidity (pH < 4). This is usually prevented by coating the sensor with a gas-permeable, but ion-impermeable, membrane (GPM). Herein, we introduce a highly sensitive, colourimetric, plastic film sensor for the measurement of both gaseous and dissolved CO2, in which a pH-sensitive dye, thymol blue (TB), is coated onto particles of hydrophilic silica to create a CO2-sensitive, TB-based pigment, which is then extruded into low-density polyethylene (LDPE) to create a GPM-free, i.e. naked, TB plastic sensor film for gaseous and dissolved CO2 measurements. When used for making dissolved CO2 measurements, the hydrophobic nature of the LDPE renders the film: (i) indifferent to ionic strength, (ii) highly resistant to acid attack and (iii) stable when stored under ambient (dark) conditions for >8 months, with no loss of colour or function. Here, the performance of the TB plastic film is primarily assessed as a dissolved CO2 sensor in highly saline (3.5 wt%) water. The TB film is blue in the absence of CO2 and yellow in its presence, exhibiting the 50% transition in its colour at ca. 0.18% CO2. This new type of CO2 sensor has great potential for monitoring CO2 levels in the hydrosphere, as well as elsewhere, e.g. in food packaging and possibly patient monitoring.
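The 50% colour transition can be captured by a simple 1:1 protonation-equilibrium sketch, with the transition point set to the ca. 0.18% CO2 quoted for the film; the functional form is an assumption for illustration, not the paper's calibration model:

```python
def blue_fraction(pct_co2, p50=0.18):
    """Fraction of the thymol blue dye remaining in its deprotonated
    (blue) form, assuming a simple 1:1 protonation equilibrium with
    CO2; p50 is the CO2 level giving the 50% colour transition."""
    return 1.0 / (1.0 + pct_co2 / p50)

f_ambient = blue_fraction(0.04)   # near-ambient CO2: film still mostly blue
f_half = blue_fraction(0.18)      # 50% transition point
```

In practice the measured quantity would be an absorbance ratio at the blue and yellow bands, calibrated against known CO2 levels.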

The thermoforming industry has been relatively slow to embrace modern measurement technologies. As a result, researchers have struggled to develop accurate thermoforming simulations, as some of the key aspects of the process remain poorly understood. For the first time, this work reports the development of a prototype multivariable instrumentation system for use in thermoforming. The system contains sensors for plug force, plug displacement, air pressure and temperature, plug temperature, and sheet temperature. Initially, it was developed to fit the tooling on a laboratory thermoforming machine, but later its performance was validated by installing it on a similar industrial tool. Throughout its development, providing access for the various sensors and their cabling was the most challenging task. In testing, all of the sensors performed well and the data collected have given a powerful insight into the operation of the process. In particular, they have shown that both the air and plug temperatures stabilize at more than 80 °C during the continuous thermoforming of amorphous polyethylene terephthalate (aPET) sheet at 110 °C. The work also highlighted significant differences in the timing and magnitude of the cavity pressures reached in the two thermoforming machines. The prototype system has considerable potential for further development.


To maintain the pace of development set by Moore's law, production processes in semiconductor manufacturing are becoming more and more complex. The development of efficient and interpretable anomaly detection systems is fundamental to keeping production costs low. As the dimension of process-monitoring data can become extremely high, anomaly detection systems are affected by the curse of dimensionality, so dimensionality reduction plays an important role. Classical dimensionality reduction approaches, such as Principal Component Analysis, generally involve transformations that seek to maximize the explained variance. In datasets with several clusters of correlated variables, the contributions of isolated variables to the explained variance may be insignificant, with the result that they may not be included in the reduced data representation. It is then impossible to detect an anomaly that is reflected only in such isolated variables. In this paper we present a new dimensionality reduction technique that takes account of such isolated variables, and we demonstrate how it can be used to build an interpretable and robust anomaly detection system for Optical Emission Spectroscopy data.
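The limitation described above can be demonstrated with a minimal numpy sketch: on synthetic data (not the paper's OES measurements, and not the paper's proposed technique), the first principal component is dominated by a cluster of correlated variables, so an anomaly confined to an isolated variable is invisible in the reduced representation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# A cluster of 10 strongly correlated variables driven by one latent factor
latent = rng.normal(size=(n, 1))
cluster = latent @ np.ones((1, 10)) + 0.1 * rng.normal(size=(n, 10))
# One isolated variable, uncorrelated with the cluster
isolated = rng.normal(size=(n, 1))
X = np.hstack([cluster, isolated])
Xc = X - X.mean(axis=0)

# Classical PCA via SVD; keep only the top component
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]

# The isolated variable's loading on PC1 is negligible ...
print(abs(pc1[-1]))   # close to 0

# ... so an anomaly confined to that variable barely moves the
# 1-component score and goes undetected
x_normal = Xc[0]
x_anom = x_normal.copy()
x_anom[-1] += 8.0     # large shift in the isolated variable only
print(abs(x_anom @ pc1 - x_normal @ pc1))   # small despite the big shift
```

This is exactly the failure mode motivating a reduction that accounts for isolated variables: maximizing explained variance alone discards directions that carry little variance but may carry the anomaly.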


In situ methods for water quality assessment are subject to both physical and time constraints. Only a limited number of sampling points can therefore be covered, making it difficult to capture the range and variability of coastal processes and constituents. In addition, the mixing of fresh and oceanic water creates complex physical, chemical and biological environments that are difficult to understand, so the existing measurement methodologies face significant logistical, technical and economic challenges. Remote sensing of ocean colour makes it possible to acquire information on the distribution of chlorophyll and other constituents over large areas of the oceans in short periods, and ocean colour data have many potential applications. Satellite-derived products are a key data source for studying the distribution patterns of organisms and nutrients (Guillaud et al. 2008) and for fishery research (Pillai and Nair 2010; Solanki et al. 2001). They also support the study of the spatial and temporal variability of phytoplankton blooms, red tide identification and harmful algal bloom monitoring (Sarangi et al. 2001; Sarangi et al. 2004; Sarangi et al. 2005; Bhagirathan et al. 2014), river plume and upwelling assessments (Doxaran et al. 2002; Sravanthi et al. 2013), global productivity analyses (Platt et al. 1988; Sathyendranath et al. 1995; IOCCG 2006) and oil spill detection (Maianti et al. 2014). For remote sensing to be accurate in complex coastal waters, it must be validated against in situ measurements. This thesis attempts to study, measure and validate these complex waters with the help of satellite data. Monitoring the health of the coastal ecosystem of the Arabian Sea in a synoptic way requires intense, extensive and continuous monitoring of water quality indicators. Phytoplankton, determined from chl-a concentration, is considered an indicator of the state of coastal ecosystems.
Currently, satellite sensors provide the most effective means for frequent, synoptic water-quality observations over large areas and represent a potential tool for assessing chl-a concentration over coastal and oceanic waters. However, algorithms designed to estimate chl-a at global scales have been shown to be less accurate in Case 2 waters, owing to the presence of water constituents other than phytoplankton that do not co-vary with it. The constituents of Arabian Sea coastal waters are region-specific because of the inherent variability of these optically active substances, which is affected by factors such as riverine input (e.g. suspended matter type and grain size, CDOM) and seasonal changes in phytoplankton composition.
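Global chl-a algorithms of the kind referred to above are typically band-ratio polynomials (the OCx family): a polynomial in the log of a blue-to-green reflectance ratio. A minimal sketch of that functional form; the coefficients below are illustrative placeholders, not an operational set, and this is not the region-specific algorithm developed in the thesis:

```python
import math

def chl_ocx(band_ratio, coeffs=(0.3, -2.9, 1.7, -0.6, -1.5)):
    """OCx-style chlorophyll-a estimate (mg m^-3):
        log10(chl) = a0 + a1*R + a2*R^2 + a3*R^3 + a4*R^4
    where R = log10(blue/green remote-sensing reflectance ratio).
    The coefficients here are illustrative placeholders only."""
    R = math.log10(band_ratio)
    log_chl = sum(a * R**i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl

# Higher blue/green ratio implies clearer, lower-chlorophyll water
print(chl_ocx(1.0))   # baseline at R = 0
print(chl_ocx(2.0))   # lower estimate for a higher ratio
```

Because the polynomial depends only on a single band ratio, any constituent that alters the blue or green reflectance independently of phytoplankton (suspended sediment, CDOM) biases the estimate, which is exactly why such global algorithms degrade in Case 2 coastal waters.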


This work investigates optical filter arrays for high-quality spectroscopic applications in the visible (VIS) wavelength range. The optical filters, consisting of Fabry-Pérot (FP) filters for high-resolution miniaturized optical nanospectrometers, are based on two highly reflective dielectric mirrors with a polymer resonance cavity between them. Each filter transmits one narrow spectral band (called a filter line in this work), depending on the height of the resonance cavity. The efficiency of such an optical filter depends on the precise fabrication of highly selective multispectral arrays of FP filters by low-cost, high-throughput methods. Fabrication of the multiple spectral filters across the entire visible range is achieved on a single substrate in a single imprint step, using 3D nanoimprint technology with very high vertical resolution. The key to this process integration is the fabrication of 3D nanoimprint stamps carrying the desired arrays of filter cavities. The spectral sensitivity of these efficient optical filters depends on the accuracy of the vertically varying cavities, which are produced by a large-area 'soft' nanoimprint technology, UV surface-conformal imprint lithography (UV-SCIL). The main problems of UV-based SCIL processes, such as a non-uniform residual layer thickness and shrinkage of the polymer, limit the potential applications of this technology. It is very important that the residual layer be thin and uniform, so that the critical dimensions of the functional 3D pattern can be controlled during the plasma etching that removes the residual layer.
In the case of the nanospectrometer, the cavity heights vary between neighbouring FP filters, so that the volume of each individual filter changes; this leads to a variation of the residual layer thickness beneath each filter. The volumetric shrinkage caused by the polymerization process affects the size and dimensions of the imprinted polymer cavities. The behaviour of the large-area UV-SCIL process is improved by using a volume-balanced design and by optimizing the process conditions. The volume-balanced stamp design distributes the 64 vertically varying filter cavities into units of 4 cavities that share a common average volume. Using these balanced volumes, uniform residual layer thicknesses (110 nm) are obtained across all filter heights. The polymer shrinkage is analysed quantitatively in the lateral and vertical directions of the FP filters. Shrinkage in the vertical direction has the greatest influence on the spectral response of the filters and is reduced from 12% to 4% by adjusting the exposure time. FP filters fabricated with the volume-balanced stamp and the optimized imprint process show a high-quality spectral response, with a linear dependence between the cavity heights and the spectral positions of the corresponding filter lines.
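The linear dependence between cavity height and filter-line position follows from the ideal Fabry-Pérot resonance condition, m·λ = 2·n·d, at normal incidence. A minimal sketch; the refractive index and interference order below are illustrative assumptions, and mirror phase shifts and penetration depth are ignored:

```python
def filter_line_nm(cavity_height_nm, n_cavity=1.5, order=2):
    """Transmission peak of an ideal Fabry-Perot cavity at normal incidence:
        m * lambda = 2 * n * d
    n_cavity = 1.5 is an assumed polymer refractive index and order = 2
    an assumed interference order, both for illustration only."""
    return 2.0 * n_cavity * cavity_height_nm / order

# Linear shift of the filter line with cavity height across the VIS range
for d in (280, 340, 400, 460):
    print(d, filter_line_nm(d))   # 420, 510, 600, 690 nm
```

Equal steps in cavity height produce equal steps in wavelength, which is the linear cavity-height-to-filter-line relationship reported for the fabricated filter arrays.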