915 results for Automated estimator
Abstract:
PURPOSE: To evaluate the sensitivity and specificity of machine learning classifiers (MLCs) for glaucoma diagnosis using Spectral Domain OCT (SD-OCT) and standard automated perimetry (SAP). METHODS: Observational cross-sectional study. Sixty-two glaucoma patients and 48 healthy individuals were included. All patients underwent a complete ophthalmologic examination, achromatic standard automated perimetry (SAP), and retinal nerve fiber layer (RNFL) imaging with SD-OCT (Cirrus HD-OCT; Carl Zeiss Meditec Inc., Dublin, California). Receiver operating characteristic (ROC) curves were obtained for all SD-OCT parameters and global indices of SAP. Subsequently, the following MLCs were tested using parameters from the SD-OCT and SAP: Bagging (BAG), Naive-Bayes (NB), Multilayer Perceptron (MLP), Radial Basis Function (RBF), Random Forest (RAN), Ensemble Selection (ENS), Classification Tree (CTREE), Ada Boost M1 (ADA), Support Vector Machine Linear (SVML), and Support Vector Machine Gaussian (SVMG). Areas under the receiver operating characteristic curves (aROC) obtained for isolated SAP and OCT parameters were compared with those of MLCs using OCT+SAP data. RESULTS: Combining OCT and SAP data, the MLCs' aROCs varied from 0.777 (CTREE) to 0.946 (RAN). The best OCT+SAP aROC, obtained with RAN (0.946), was significantly larger than that of the best single OCT parameter (p<0.05), but was not significantly different from the aROC obtained with the best single SAP parameter (p=0.19). CONCLUSION: Machine learning classifiers trained on OCT and SAP data can successfully discriminate between healthy and glaucomatous eyes. The combination of OCT and SAP measurements improved the diagnostic accuracy compared with OCT data alone.
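The comparison described above, training classifiers on combined SD-OCT and SAP parameters and comparing areas under the ROC curve, can be sketched as follows. This is a minimal illustration on synthetic data; scikit-learn, the Random Forest settings, and all feature values are assumptions for illustration, not the study's actual pipeline.

```python
# Hedged sketch: comparing classifier aROC on a combined feature set versus
# a single-modality subset. Synthetic data stand in for the real measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = np.array([1] * 62 + [0] * 48)  # 62 glaucoma, 48 healthy, as in the study
# Hypothetical features: columns 0-2 stand in for OCT (RNFL) parameters,
# columns 3-4 for SAP global indices; signal is added to the glaucoma class.
X = rng.normal(size=(len(y), 5))
X[y == 1, :3] -= 0.8   # thinner RNFL in glaucoma (made-up effect size)
X[y == 1, 3:] -= 0.6   # worse SAP indices in glaucoma (made-up effect size)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Cross-validated probabilities give an unbiased aROC estimate.
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
auc_combined = roc_auc_score(y, proba)

proba_oct = cross_val_predict(clf, X[:, :3], y, cv=5, method="predict_proba")[:, 1]
auc_oct = roc_auc_score(y, proba_oct)
print(f"aROC OCT-only: {auc_oct:.3f}  aROC OCT+SAP: {auc_combined:.3f}")
```

The same cross-validated aROC comparison could be repeated for each of the ten classifiers listed in the abstract.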
Abstract:
Aims. In this work, we describe the pipeline for the fast supervised classification of light curves observed by the CoRoT exoplanet CCDs. We present the classification results obtained for the first four measured fields, which represent one year of in-orbit operation. Methods. The basis of the adopted supervised classification methodology has been described in detail in a previous paper, as is its application to the OGLE database. Here, we present the modifications of the algorithms and of the training set that optimize the performance when applied to the CoRoT data. Results. Classification results are presented for the observed fields IRa01, SRc01, LRc01, and LRa01 of the CoRoT mission. Statistics on the number of variables and the number of objects per class are given, and typical light curves of high-probability candidates are shown. We also report on new stellar variability types discovered in the CoRoT data. The full classification results are publicly available.
Abstract:
We develop an automated spectral synthesis technique for the estimation of metallicities ([Fe/H]) and carbon abundances ([C/Fe]) for metal-poor stars, including carbon-enhanced metal-poor stars, for which other methods may prove insufficient. This technique, autoMOOG, is designed to operate on relatively strong features visible in even low- to medium-resolution spectra, yielding results comparable to much more telescope-intensive high-resolution studies. We validate this method by comparison with 913 stars that have existing high-resolution and low- to medium-resolution spectra, and that cover a wide range of stellar parameters. We find that at low metallicities ([Fe/H] ≲ -2.0), we successfully recover both the metallicity and carbon abundance, where possible, with an accuracy of ~0.20 dex. At higher metallicities, due to issues of continuum placement in the spectral normalization done prior to running autoMOOG, a general underestimate of the overall metallicity of a star is seen, although the carbon abundance is still successfully recovered. As a result, this method is only recommended for use on samples of stars of known, sufficiently low metallicity. For these low-metallicity stars, however, autoMOOG performs much more consistently and quickly than similar existing techniques, which should allow for analyses of large samples of metal-poor stars in the near future. Steps to improve and correct the continuum placement difficulties are being pursued.
Abstract:
The main purpose of this paper is to present the architecture of an automated system that allows real-time (online) monitoring and tracking of faults and electromagnetic transients observed in primary power distribution networks. Through the interconnection of this automated system to the utility operation center, it will be possible to provide an efficient tool that assists in decision-making by the Operation Center. In short, the aim is to provide all the tools necessary to identify, almost instantaneously, the occurrence of faults and transient disturbances in the primary power distribution system, as well as to determine their respective origin and probable location. The compiled results from the application of this automated system show that the developed techniques provide accurate results, identifying and locating several occurrences of faults observed in the distribution system.
Abstract:
Leaf wetness duration (LWD) is a key parameter in agricultural meteorology, since it is related to the epidemiology of many important crops, controlling pathogen infection and development rates. Because LWD is not widely measured, several methods have been developed to estimate it from weather data. Among the models used to estimate LWD, those that use physical principles of dew formation and dew and/or rain evaporation have shown good portability and sufficiently accurate results, but their complexity is a disadvantage for operational use. Alternatively, empirical models have been used despite their limitations. The simplest empirical models use only relative humidity data. The objective of this study was to evaluate the performance of three RH-based empirical models to estimate LWD in four regions around the world that have different climate conditions. Hourly LWD, air temperature, and relative humidity data were obtained from Ames, IA (USA), Elora, Ontario (Canada), Florence, Tuscany (Italy), and Piracicaba, Sao Paulo State (Brazil). These data were used to evaluate the performance of the following empirical LWD estimation models: constant RH threshold (RH >= 90%); dew point depression (DPD); and extended RH threshold (EXT_RH). The models performed differently in the four locations. In Ames, Elora, and Piracicaba, the RH >= 90% and DPD models underestimated LWD, whereas in Florence these methods overestimated LWD, especially for shorter wet periods. When the EXT_RH model was used, LWD was overestimated for all locations, with a significant increase in the errors. In general, the RH >= 90% model performed best, presenting the highest general fraction of correct estimates (F(C)), between 0.87 and 0.92, and the lowest false alarm ratio (F(AR)), between 0.02 and 0.31.
The use of specific thresholds for each location improved the accuracy of the RH model substantially, even when independent data were used; MAE ranged from 1.23 to 1.89 h, which is very similar to the errors obtained with published physical models for LWD estimation. Based on these results, we concluded that, if calibrated locally, LWD can be estimated with acceptable accuracy by RH above a specific threshold, and that the EXT_RH method was unsuitable for estimating LWD at the locations used in this study.
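The constant RH-threshold model above, together with its local calibration, can be sketched in a few lines. The hourly RH values, the observed wetness duration, and the calibrated 92% threshold below are all made up for illustration; only the RH >= 90% rule itself comes from the abstract.

```python
# Hedged sketch of the simplest empirical LWD model evaluated above:
# daily leaf wetness duration = count of hours with RH at or above a threshold.
def estimate_lwd(hourly_rh, threshold=90.0):
    """Daily LWD (hours) for one day of hourly RH (%) readings."""
    return sum(1 for rh in hourly_rh if rh >= threshold)

# One hypothetical day of hourly RH (%); suppose a wetness sensor observed 9 h.
rh_day = [72, 70, 75, 82, 88, 91, 93, 95, 96, 96, 95, 94,
          92, 90, 87, 83, 80, 78, 76, 74, 73, 85, 92, 94]
lwd_default = estimate_lwd(rh_day)                  # RH >= 90% rule
lwd_local = estimate_lwd(rh_day, threshold=92.0)    # locally calibrated threshold
print(lwd_default, lwd_local)  # → 11 9
```

In this made-up example the default 90% threshold overestimates the observed 9 h, while the locally calibrated threshold matches it, mirroring the abstract's conclusion that local calibration improves accuracy.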
Abstract:
We present a novel nonparametric density estimator and a new data-driven bandwidth selection method with excellent properties. The approach is inspired by the principles of the generalized cross entropy method. The proposed density estimation procedure has numerous advantages over the traditional kernel density estimator methods. Firstly, for the first time in the nonparametric literature, the proposed estimator allows for a genuine incorporation of prior information in the density estimation procedure. Secondly, the approach provides the first data-driven bandwidth selection method that is guaranteed to provide a unique bandwidth for any data. Lastly, simulation examples suggest the proposed approach outperforms the current state of the art in nonparametric density estimation in terms of accuracy and reliability.
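To make the idea of data-driven bandwidth selection concrete, here is a sketch of a standard scheme: a Gaussian kernel density estimator whose bandwidth is chosen by leave-one-out log-likelihood. This is an illustrative stand-in, not the generalized cross entropy method the abstract proposes; the data and candidate bandwidths are made up.

```python
# Hedged sketch: Gaussian KDE with a data-driven bandwidth chosen by
# leave-one-out log-likelihood (an illustrative standard method).
import math

def gauss_kde(x, data, h):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    c = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

def loo_log_likelihood(data, h):
    """Leave-one-out log-likelihood of bandwidth h on the sample."""
    total = 0.0
    for i, xi in enumerate(data):
        rest = data[:i] + data[i + 1:]
        total += math.log(max(gauss_kde(xi, rest, h), 1e-300))
    return total

def select_bandwidth(data, candidates):
    """Pick the candidate bandwidth maximizing the LOO log-likelihood."""
    return max(candidates, key=lambda h: loo_log_likelihood(data, h))

# Hypothetical bimodal sample: clusters near 1.5 and near 5.
data = [1.1, 1.4, 0.9, 1.8, 2.2, 5.0, 5.3, 4.8, 5.1, 4.6]
candidates = [0.05, 0.1, 0.2, 0.4, 0.8]
h = select_bandwidth(data, candidates)
print(f"selected bandwidth: {h}")
```

Too small a bandwidth overfits (the left-out point falls in a density trough), too large a bandwidth over-smooths, so the LOO criterion picks an intermediate value; the uniqueness guarantee claimed in the abstract is a property of the proposed method, not of this sketch.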
Abstract:
This paper reports on a system for automated agent negotiation, based on a formal and executable approach to capture the behavior of parties involved in a negotiation. It uses the JADE agent framework, and its major distinctive feature is the use of declarative negotiation strategies. The negotiation strategies are expressed in a declarative rules language, defeasible logic, and are applied using the implemented system DR-DEVICE. The key ideas and the overall system architecture are described, and a particular negotiation case is presented in detail.
Abstract:
An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density, T1-weighted, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
Abstract:
The fabrication of heavy-duty printer heads involves a great deal of grinding work. Previously in the printer manufacturing industry, four grinding procedures were manually conducted in four grinding machines, respectively. The productivity of the whole grinding process was low due to the long loading time. Also, the machine floor space occupation was large because of the four separate grinding machines. The manual operation also caused inconsistent quality. This paper reports the system and process development of a highly integrated and automated high-speed grinding system for printer heads. The developed system, which is believed to be the first of its kind, not only produces printer heads of consistently good quality, but also significantly reduces the cycle time and machine floor space occupation.
Abstract:
A sensitive and automated method is described for the determination of rifampicin in plasma samples for therapeutic drug monitoring by in-tube solid-phase microextraction coupled with liquid chromatography (in-tube SPME/LC). Important factors in the optimization of in-tube SPME are discussed, such as coating type, sample pH, sample draw/eject volume, number of draw/eject cycles, and draw/eject flow rate. Analyte pre-concentrated in the polyethylene glycol phase was directly transferred to the liquid chromatographic column by percolation of the mobile phase, without carryover. The method was linear over the 0.1-100 μg/mL range, with a linear coefficient value (r²) of 0.998. The inter-assay precision presented a coefficient of variation <= 1.7%. The effectiveness and practicability of the proposed method are proven by analysis of plasma samples from ageing patients undergoing therapy with rifampicin.
Abstract:
Purpose: To evaluate the ability of the GDx Variable Corneal Compensation (VCC) Guided Progression Analysis (GPA) software for detecting glaucomatous progression. Design: Observational cohort study. Participants: The study included 453 eyes from 252 individuals followed for an average of 46 +/- 14 months as part of the Diagnostic Innovations in Glaucoma Study. At baseline, 29% of the eyes were classified as glaucomatous, 67% of the eyes were classified as suspects, and 5% of the eyes were classified as healthy. Methods: Images were obtained annually with the GDx VCC and analyzed for progression using the Fast Mode of the GDx GPA software. Progression using conventional methods was determined by the GPA software for standard automated achromatic perimetry (SAP) and by masked assessment of optic disc stereophotographs by expert graders. Main Outcome Measures: Sensitivity, specificity, and likelihood ratios (LRs) for detection of glaucoma progression using the GDx GPA were calculated with SAP and optic disc stereophotographs used as reference standards. Agreement among the different methods was reported using the AC1 coefficient. Results: Thirty-four of the 431 glaucoma and glaucoma suspect eyes (8%) showed progression by SAP or optic disc stereophotographs. The GDx GPA detected 17 of these eyes for a sensitivity of 50%. Fourteen eyes showed progression only by the GDx GPA, with a specificity of 96%. Positive and negative LRs were 12.5 and 0.5, respectively. None of the healthy eyes showed progression by the GDx GPA, with a specificity of 100% in this group. Inter-method agreement (AC1 coefficient and 95% confidence intervals) for non-progressing and progressing eyes was 0.96 (0.94-0.97) and 0.44 (0.28-0.61), respectively. Conclusions: The GDx GPA detected glaucoma progression in a significant number of cases showing progression by conventional methods, with high specificity and high positive LRs.
Estimates of the accuracy for detecting progression suggest that the GDx GPA could be used to complement clinical evaluation in the detection of longitudinal change in glaucoma. Ophthalmology 2010; 117: 462-470.
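The sensitivity, specificity, and likelihood ratios reported above follow directly from their definitions. A short check using the rounded values from the abstract (sensitivity 50%, specificity 96%):

```python
# Verifying the reported likelihood ratios from the abstract's rounded
# sensitivity and specificity. LR+ = sens / (1 - spec); LR- = (1 - sens) / spec.
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a binary diagnostic test."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

sens = 17 / 34   # 0.50: 17 of 34 progressing eyes flagged by the GDx GPA
spec = 0.96      # rounded specificity reported in the abstract
lr_pos, lr_neg = likelihood_ratios(sens, spec)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```

This reproduces the reported LR+ of 12.5 and LR- of roughly 0.5: a positive GDx GPA result raises the odds of true progression 12.5-fold, while a negative result only halves them.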
Abstract:
PURPOSE. To evaluate the relationship between pattern electroretinogram (PERG) amplitude, macular and retinal nerve fiber layer (RNFL) thickness by optical coherence tomography (OCT), and visual field (VF) loss on standard automated perimetry (SAP) in eyes with temporal hemianopia from chiasmal compression. METHODS. Forty-one eyes from 41 patients with permanent temporal VF defects from chiasmal compression and 41 healthy subjects underwent transient full-field and hemifield (temporal or nasal) stimulation PERG, SAP, and time-domain OCT macular and RNFL thickness measurements. Comparisons were made using Student's t-test. Deviation from normal VF sensitivity for the central 18° of the VF was expressed in 1/Lambert units. Correlations between measurements were verified by linear regression analysis. RESULTS. PERG and OCT measurements were significantly lower in eyes with temporal hemianopia than in normal eyes. A significant correlation was found between VF sensitivity loss and full-field or nasal, but not temporal, hemifield PERG amplitude. Likewise, a significant correlation was found between VF sensitivity loss and most OCT parameters. No significant correlation was observed between OCT and PERG parameters, except for nasal hemifield amplitude. A significant correlation was observed between several macular and RNFL thickness parameters. CONCLUSIONS. In patients with chiasmal compression, PERG amplitude and OCT thickness measurements were significantly related to VF loss, but not to each other. OCT and PERG quantify neuronal loss differently, but both technologies are useful in understanding the structure-function relationship in patients with chiasmal compression. (ClinicalTrials.gov number, NCT00553761.) (Invest Ophthalmol Vis Sci. 2009; 50: 3535-3541) DOI:10.1167/iovs.08-3093
Abstract:
Concerns have been raised about the reproducibility of brachial artery reactivity (BAR), because subjective decisions regarding the location of interfaces may influence the measurement of very small changes in lumen diameter. We studied 120 consecutive patients with BAR to determine whether an automated technique could be applied and whether experience influenced reproducibility between two observers, one experienced and one inexperienced. Digital cineloops were measured automatically, using software that measures the leading edge of the endothelium and tracks it in sequential frames, and also manually, where a set of three point-to-point measurements was averaged. There was a high correlation between the automated and manual techniques for both observers, although less variability was present with expert readers. The limits of agreement overall for interobserver concordance were 0.13 +/- 0.65 mm for the manual and 0.03 +/- 0.74 mm for the automated measurement. For intraobserver concordance, the limits of agreement were -0.07 +/- 0.38 mm for observer 1 and -0.16 +/- 0.55 mm for observer 2. We concluded that BAR measurements were highly concordant between observers, although more concordant using the automated method, and that experience does affect concordance. Care must be taken to ensure that the same segments are measured between observers and serially.
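Limits of agreement such as "0.13 +/- 0.65 mm" above are conventionally computed as the mean difference between paired measurements plus or minus 1.96 standard deviations of the differences (the Bland-Altman method). A minimal sketch; the paired lumen-diameter values are made up for illustration.

```python
# Hedged sketch of Bland-Altman limits of agreement between two observers:
# bias (mean difference) +/- 1.96 * SD of the paired differences.
import statistics

def limits_of_agreement(a, b):
    """Return (bias, half_width) for paired measurement lists a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    half_width = 1.96 * statistics.stdev(diffs)  # sample SD of differences
    return bias, half_width

# Hypothetical paired lumen-diameter measurements (mm) by two observers.
obs1 = [3.42, 3.55, 3.61, 3.48, 3.70, 3.52, 3.66, 3.58]
obs2 = [3.40, 3.50, 3.65, 3.44, 3.72, 3.49, 3.60, 3.55]
bias, hw = limits_of_agreement(obs1, obs2)
print(f"limits of agreement: {bias:.2f} +/- {hw:.2f} mm")
```

About 95% of inter-observer differences are expected to fall within bias +/- half-width, which is why a wider interval (as seen for the automated method's 0.74 mm versus the manual 0.65 mm) indicates looser agreement despite a smaller bias.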