6 results for Classification time
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
This work proposes a system for the classification of industrial steel pieces by means of a magnetic nondestructive testing device. The proposed classification system comprises two main stages: an online stage and an off-line optimization stage. In the online stage, the system classifies inputs and records misclassification information for posterior analysis. In the off-line optimization stage, the topology of a Probabilistic Neural Network is optimized by a Feature Selection algorithm combined with the Probabilistic Neural Network to increase the classification rate. The proposed Feature Selection algorithm searches the signal spectrogram for relevant features by combining three basic elements: a Sequential Forward Selection algorithm, a Feature Cluster Grow algorithm with classification-rate gradient analysis, and a Sequential Backward Selection algorithm. In addition, a trash-data recycling algorithm is proposed to select optimal feedback samples from among the misclassified ones.
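The abstract names the selection algorithms without detailing them; below is a minimal sketch of a generic Sequential Forward Selection loop, with scikit-learn's GaussianNB standing in for the Probabilistic Neural Network. The cross-validated accuracy criterion and the n_features stopping rule are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch of Sequential Forward Selection (SFS) over
    # spectrogram features; GaussianNB stands in for the PNN classifier.
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    def sequential_forward_selection(X, y, n_features):
        """Greedily add the feature that most improves CV accuracy."""
        selected, remaining = [], list(range(X.shape[1]))
        while remaining and len(selected) < n_features:
            scores = []
            for f in remaining:
                cols = selected + [f]
                acc = cross_val_score(GaussianNB(), X[:, cols], y, cv=5).mean()
                scores.append((acc, f))
            best_acc, best_f = max(scores)
            selected.append(best_f)
            remaining.remove(best_f)
        return selected

A Sequential Backward Selection pass, as combined in the proposed algorithm, would run the mirror-image loop: start from the full selected set and greedily drop the feature whose removal hurts accuracy least.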
Abstract:
L. Antonangelo, F. S. Vargas, M. M. P. Acencio, A. P. Cora, L. R. Teixeira, E. H. Genofre and R. K. B. Sales, "Effect of temperature and storage time on cellular analysis of fresh pleural fluid samples".
Objective: Despite the methodological variability in preparation techniques for pleural fluid cytology, it is essential that the cells be preserved, permitting adequate morphological classification. We evaluated numerical and morphological changes in pleural fluid specimens processed after storage at room temperature or under refrigeration. Methods: Aliquots of pleural fluid from 30 patients, collected in ethylenediaminetetraacetic acid (EDTA)-coated tubes and maintained at room temperature (21 °C) or under refrigeration (4 °C), were evaluated after 2 and 6 hours and 1, 2, 3, 4, 7 and 14 days. The evaluation included cytomorphology and global and percentage counts of leucocytes, macrophages and mesothelial cells. Results: The samples showed quantitative cellular variations from day 3 or 4 onwards, depending on the storage conditions. Morphological alterations occurred earlier in samples maintained at room temperature (day 2) than in those under refrigeration (day 4). Conclusions: This study confirms that storage time and temperature are potential pre-analytical sources of error in pleural fluid cytology.
Abstract:
We present a detailed study of carbon-enhanced metal-poor (CEMP) stars, based on high-resolution spectroscopic observations of a sample of 18 stars. The stellar spectra for this sample were obtained at the 4.2 m William Herschel Telescope in 2001 and 2002, using the Utrecht Echelle Spectrograph at a resolving power R ≈ 52 000 and S/N ≈ 40, covering the wavelength range λλ3700–5700 Å. The atmospheric parameters determined for this sample indicate temperatures ranging from 4750 K to 7100 K, log g from 1.5 to 4.3, and metallicities −3.0 ≤ [Fe/H] ≤ −1.7. Elemental abundances for C, Na, Mg, Sc, Ti, Cr, Cu, Zn, Sr, Y, Zr, Ba, La, Ce, Nd, Sm, Eu, Gd and Dy are determined. Abundances for an additional 109 stars were taken from the literature and combined with the data of our sample. The literature sample reveals a lack of reliable abundance estimates for species associated with the r-process for about 67% of CEMP stars, preventing a complete understanding of this class of stars, since [Ba/Eu] ratios are used to classify them. Although eight stars in our observed sample are also found in the literature sample, Eu abundances or limits are determined for four of these stars for the first time. From the observed correlations between C, Ba and Eu abundances, we argue that the CEMP-r/s class has the same astrophysical origin as CEMP-s stars, highlighting the need for a more complete understanding of Eu production.
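The abstract notes that [Ba/Eu] ratios are used to classify CEMP stars but does not reproduce the criteria. The sketch below applies the commonly cited Beers & Christlieb (2005)-style thresholds; the exact cut-off values are an assumption imported from that reference, not from this abstract.

    # Hypothetical sketch: assign a CEMP subclass from abundance ratios,
    # using thresholds in the style of Beers & Christlieb (2005).
    def classify_cemp(ba_fe, eu_fe):
        """Return a CEMP subclass from [Ba/Fe] and [Eu/Fe] (dex)."""
        ba_eu = ba_fe - eu_fe  # bracket algebra: [Ba/Eu] = [Ba/Fe] - [Eu/Fe]
        if eu_fe > 1.0 and 0.0 < ba_eu < 0.5:
            return "CEMP-r/s"   # mixed r- and s-process signature
        if ba_fe > 1.0 and ba_eu > 0.5:
            return "CEMP-s"     # s-process enriched
        if eu_fe > 1.0:
            return "CEMP-r"     # r-process enriched
        if ba_fe < 0.0:
            return "CEMP-no"    # no neutron-capture enhancement
        return "CEMP (unclassified)"

Because the r/s and r branches both hinge on Eu, the missing Eu abundances reported for about 67% of literature CEMP stars leave those objects unclassifiable under this scheme, which is exactly the gap the paper highlights.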
Abstract:
Purpose: To evaluate retinal nerve fiber layer measurements with time-domain (TD) and spectral-domain (SD) optical coherence tomography (OCT), and to test the diagnostic ability of both technologies in glaucomatous patients with asymmetric visual hemifield loss. Methods: 36 patients with primary open-angle glaucoma with visual field loss in one hemifield (affected) and no loss in the other (non-affected), and 36 age-matched healthy controls, had the study eye imaged with Stratus-OCT (Carl Zeiss Meditec Inc., Dublin, California, USA) and 3D OCT-1000 (Topcon, Tokyo, Japan). Peripapillary retinal nerve fiber layer measurements and normative classifications were recorded. Total deviation values were averaged in each hemifield (hemifield mean deviation) for each subject. Visual field and retinal nerve fiber layer "asymmetry indexes" were calculated as the ratio between the affected and non-affected hemifields and their corresponding hemiretinas. Results: Retinal nerve fiber layer measurements in non-affected hemifields (mean [SD] 87.0 [17.1] µm and 84.3 [20.2] µm for TD and SD-OCT, respectively) were thinner than in controls (119.0 [12.2] µm and 117.0 [17.7] µm, P<0.001). The OCT normative database classified 42% and 67% of hemiretinas corresponding to non-affected hemifields as abnormal in TD and SD-OCT, respectively (P=0.01). Retinal nerve fiber layer measurements were consistently thicker with TD than with SD-OCT. The retinal nerve fiber layer thickness asymmetry index was similar in TD (0.76 [0.17]) and SD-OCT (0.79 [0.12]) and significantly greater than the visual field asymmetry index (0.36 [0.20], P<0.001). Conclusions: Normal hemifields of glaucoma patients had a thinner retinal nerve fiber layer than healthy eyes, as measured by both TD and SD-OCT. Retinal nerve fiber layer measurements were thicker with TD than with SD-OCT. SD-OCT detected abnormal retinal nerve fiber layer thickness more often than TD-OCT.
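As a minimal sketch of the "asymmetry index" described above: a ratio between paired hemifield (or corresponding hemiretina) summary values. The orientation of the ratio (which side is the numerator) is an assumption here; the abstract does not spell it out.

    # Hypothetical sketch of the asymmetry index as a simple ratio of
    # paired hemifield/hemiretina summary values (orientation assumed).
    def asymmetry_index(affected, non_affected):
        """Values near 1.0 indicate symmetry between the two sides;
        values well below 1.0 indicate marked asymmetry."""
        return affected / non_affected

Under this reading, the reported RNFL indexes (~0.76–0.79) versus the visual field index (~0.36) reflect the abstract's point that structural asymmetry was far milder than functional asymmetry in the same eyes.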
Abstract:
The Gravity Recovery and Climate Experiment (GRACE) mission is dedicated to measuring temporal variations of the Earth's gravity field. In this study, the Stokes coefficients made available by the Groupe de Recherche en Géodésie Spatiale (GRGS) at 10-day intervals were converted into equivalent water height (EWH) for a ~4-year period in the Amazon basin (July 2002 to May 2006). The seasonal amplitudes of the EWH signal are the largest on the surface of the Earth, reaching ~1250 mm at the basin's center. The error budget represents ~130 mm of EWH, including formal errors on the Stokes coefficients, leakage errors (12–21 mm) and spectrum truncation (10–15 mm). A comparison between in situ river level time series measured at 233 ground-based hydrometric stations (HS) in the Amazon basin and vertically integrated EWH derived from GRACE is carried out in this paper. Although EWH and HS measure different water bodies, in most cases a high correlation (up to ~80%) is detected between the HS series and the EWH series at the same site. This correlation allows linear relationships to be adjusted between the in situ and GRACE-based series for the major tributaries of the Amazon river. The regression coefficients decrease from upstream to downstream along the rivers, reaching the theoretical value of 1 at the Amazon's mouth in the Atlantic Ocean. The variation of the regression coefficients versus the distance from the estuary is analysed for the largest rivers in the basin. In a second step, a classification of the proportionality between the in situ and GRACE time series is proposed.
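As a minimal sketch of the comparison step, the snippet below correlates a station's water-level series with a co-located EWH series and fits the linear regression coefficient. Both series are assumed to be already resampled onto the same 10-day GRACE time steps; that alignment, and the function names, are assumptions for illustration.

    import numpy as np

    # Hypothetical sketch: compare an in situ river-level series (HS)
    # with a co-located GRACE EWH series, both in mm and assumed
    # aligned on the same 10-day time steps.
    def compare_series(hs_mm, ewh_mm):
        """Return (Pearson correlation, regression slope of EWH vs HS)."""
        r = np.corrcoef(hs_mm, ewh_mm)[0, 1]
        slope, intercept = np.polyfit(hs_mm, ewh_mm, deg=1)
        return r, slope

A slope approaching 1 near the estuary would match the theoretical value quoted in the abstract, while upstream stations would show larger slopes because the stage gauge sees only the river channel, not the full vertically integrated water column.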
Abstract:
The ubiquity of time series data across almost all human endeavors has produced great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest-neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest-neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure.

In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest-neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by “suggesting” to the clustering algorithm that subjectively similar but complex objects belong in a sparser, larger-diameter cluster than is truly warranted.

We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower-bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
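The abstract describes complexity invariance without giving a formula. The sketch below follows the CID formulation commonly associated with this line of work: Euclidean distance scaled by a complexity correction factor. Treat the particular complexity estimate (length of the line through consecutive points) as an assumption rather than a restatement of the paper.

    import numpy as np

    # Sketch of a complexity-invariant distance (CID) in the spirit of
    # the abstract: Euclidean distance times a correction factor >= 1.
    def complexity_estimate(x):
        """A simple complexity estimate: the length of the line through
        consecutive points (longer line = more complex series)."""
        return np.sqrt(np.sum(np.diff(x) ** 2))

    def cid(q, c):
        """Complexity-invariant distance between equal-length series."""
        ed = np.linalg.norm(q - c)  # plain Euclidean distance
        ce_q, ce_c = complexity_estimate(q), complexity_estimate(c)
        cf = max(ce_q, ce_c) / max(min(ce_q, ce_c), 1e-12)
        return ed * cf

The correction factor equals 1 when both series are equally complex, so CID never shrinks a distance; it only stretches pairs of unequal complexity, which is what prevents complex objects from being pulled into simpler classes under nearest-neighbor classification.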