979 results for Methods : Statistical
Abstract:
PURPOSE: The main goal of this study was to develop and compare two different techniques for the classification of specific types of corneal shape when Zernike coefficients are used as inputs. A feed-forward artificial neural network (NN) and discriminant analysis (DA) were used. METHODS: The inputs for both the NN and DA were the first 15 standard Zernike coefficients for 80 previously classified corneal elevation data files from an Eyesys System 2000 Videokeratograph (VK), installed at the Departamento de Oftalmologia of the Escola Paulista de Medicina, São Paulo. The NN had 5 output neurons, which were associated with 5 typical corneal shapes: keratoconus, with-the-rule astigmatism, against-the-rule astigmatism, "regular" or "normal" shape, and post-PRK. RESULTS: The NN and DA responses were statistically analyzed in terms of precision ([true positive + true negative]/total number of cases). Mean overall results across all cases for the NN and DA techniques were, respectively, 94% and 84.8%. CONCLUSION: Although we used a relatively small database, the results obtained in the present study indicate that Zernike polynomials as descriptors of corneal shape may be reliable input data for the diagnostic automation of VK maps, using either NN or DA.
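A minimal sketch of how such a comparison could be set up, assuming scikit-learn; the 80×15 coefficient matrix, the five class labels, and the network size below are synthetic placeholders, not the authors' dataset or architecture:

```python
# Hedged sketch: comparing a feed-forward neural network and linear
# discriminant analysis on 15 Zernike coefficients (synthetic stand-in data).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 15))      # 80 corneas x 15 Zernike coefficients (placeholder)
y = rng.integers(0, 5, size=80)    # 5 shape classes: keratoconus, WTR, ATR, normal, post-PRK

nn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
da = LinearDiscriminantAnalysis()

for name, model in [("NN", nn), ("DA", da)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
```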
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods for processing large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (soil pollution by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
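A minimal sketch of the kind of kernel-based classification mentioned above (an RBF-kernel support vector machine for a two-class soil-type problem), assuming scikit-learn; the coordinates, auxiliary variable, and labelling rule are synthetic placeholders, not the monitoring-network data:

```python
# Hedged sketch: kernel-based classification (RBF SVM) of soil types from
# spatial coordinates plus one auxiliary variable (synthetic stand-in data).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))                    # x, y coordinates + auxiliary variable
y = (X[:, 0] + 0.5 * X[:, 2] > 0.7).astype(int)   # two soil classes (toy labelling rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```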
Abstract:
The characterization and grading of glioma tumors, via image-derived features, for diagnosis, prognosis, and treatment response has been an active research area in medical image computing. This paper presents a novel method for automatic detection and classification of glioma from conventional T2-weighted MR images. Automatic detection of the tumor was established using a newly developed method called the Adaptive Gray-level Algebraic set Segmentation Algorithm (AGASA). Statistical features were extracted from the detected tumor texture using first-order statistics and gray-level co-occurrence matrix (GLCM) based second-order statistical methods. The statistical significance of the features was determined by a t-test and its corresponding p-value. A decision system was developed for the grade detection of glioma using these selected features and their p-values. The detection performance of the decision system was validated using the receiver operating characteristic (ROC) curve. The diagnosis and grading of glioma using this non-invasive method can provide promising results in medical image computing.
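A minimal sketch of the feature-extraction and significance-testing steps described above, assuming scikit-image (>= 0.19) and SciPy; the images are random stand-ins and the grade groups are invented, so this only illustrates the GLCM/t-test mechanics, not the AGASA segmentation:

```python
# Hedged sketch: first-order and GLCM (second-order) texture features from a
# tumour region, followed by a two-sample t-test between grade groups.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from scipy import stats

def texture_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return {
        "mean": img.mean(),                      # first-order statistics
        "std": img.std(),
        "contrast": graycoprops(glcm, "contrast")[0, 0],       # second-order (GLCM)
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
    }

rng = np.random.default_rng(2)
low_grade  = [texture_features(rng.integers(0, 256, (64, 64), dtype=np.uint8)) for _ in range(20)]
high_grade = [texture_features(rng.integers(0, 256, (64, 64), dtype=np.uint8)) for _ in range(20)]

for key in ("contrast", "homogeneity"):
    t, p = stats.ttest_ind([f[key] for f in low_grade], [f[key] for f in high_grade])
    print(f"{key}: t = {t:.2f}, p = {p:.3f}")
```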
Abstract:
Soil organic matter (SOM) plays an important role in the physical, chemical and biological properties of soil. Therefore, the amount of SOM is important for soil management in sustainable agriculture. The objective of this work was to evaluate the amount of SOM in oxisols by different methods and to compare them, using principal component analysis, with regard to their limitations. The methods used in this work were Walkley-Black, elemental analysis, total organic carbon (TOC) and thermogravimetry. According to our results, TOC and elemental analysis were the most satisfactory methods for carbon quantification, due to their better accuracy and reproducibility.
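A minimal sketch of using principal component analysis to compare several measurement methods applied to the same samples, assuming scikit-learn; the four-method data matrix below is simulated, not the oxisol measurements:

```python
# Hedged sketch: PCA of soil organic matter values measured by four methods,
# to see how the methods group and which dominate the variance (invented data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = soil samples; columns = Walkley-Black, elemental analysis, TOC, thermogravimetry
rng = np.random.default_rng(3)
true_som = rng.uniform(10, 40, size=30)
X = np.column_stack([true_som + rng.normal(0, s, 30) for s in (2.0, 1.0, 1.0, 3.0)])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance ratio:", pca.explained_variance_ratio_)
print("method loadings on PC1:", pca.components_[0])
```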
Abstract:
Aims. We create a catalogue of simulated fossil groups and study their properties, in particular the merging histories of their first-ranked galaxies. We compare the simulated fossil group properties with those of both simulated non-fossil and observed fossil groups. Methods. Using simulations and a mock galaxy catalogue, we searched for massive (>5 × 10^13 h^-1 M_⊙) fossil groups in the Millennium Simulation Galaxy Catalogue. In addition, we attempted to identify observed fossil groups in the Sloan Digital Sky Survey Data Release 6 using identical selection criteria. Results. Our predictions on the basis of the simulation data are: (a) fossil groups comprise about 5.5% of the total population of groups/clusters with masses larger than 5 × 10^13 h^-1 M_⊙. This fraction is consistent with the fraction of fossil groups identified in the SDSS, after all observational biases have been taken into account; (b) about 88% of the dominant central objects in fossil groups are elliptical galaxies that have a median R-band absolute magnitude of ~ -23.5 - 5 log h, which is typical of the observed fossil groups known in the literature; (c) first-ranked galaxies of systems with M > 5 × 10^13 h^-1 M_⊙, regardless of whether they are fossil or non-fossil, are mainly formed by gas-poor mergers; (d) although fossil groups, in general, assembled most of their virial masses at higher redshifts in comparison with non-fossil groups, first-ranked galaxies in fossil groups merged later, i.e. at lower redshifts, compared with their non-fossil-group counterparts. Conclusions. We therefore expect to observe a number of luminous galaxies in the centres of fossil groups that show signs of a recent major merger.
Abstract:
Context. Fossil systems are defined to be X-ray bright galaxy groups (or clusters) with a two-magnitude difference between their two brightest galaxies within half the projected virial radius, and represent an interesting extreme of the population of galaxy agglomerations. However, the physical conditions and processes leading to their formation are still poorly constrained. Aims. We compare the outskirts of fossil systems with those of normal groups to understand whether environmental conditions play a significant role in their formation. We study groups of galaxies in both numerical simulations and observations. Methods. We use a variety of statistical tools, including the spatial cross-correlation function and the local density parameter Δ_5, to probe differences in the density and structure of the environments of "normal" and "fossil" systems in the Millennium simulation. Results. We find that the number density of galaxies surrounding fossil systems evolves from greater than that observed around normal systems at z = 0.69, to lower than that of normal systems by z = 0. Both fossil and normal systems exhibit an increment in their otherwise radially declining local density measure (Δ_5) at distances of order 2.5 r_vir from the system centre. We show that this increment is more noticeable for fossil systems than for normal systems and demonstrate that this difference is linked to the earlier formation epoch of fossil groups. Despite the importance of the assembly time, we show that the environment is different for fossil and non-fossil systems with similar masses and formation times along their evolution. We also confirm that the physical characteristics identified in the Millennium simulation can also be detected in SDSS observations. Conclusions. Our results confirm the commonly held belief that fossil systems assembled earlier than normal systems, but also show that the surroundings of fossil groups could be responsible for the formation of their large magnitude gap.
Abstract:
Aims. We derive lists of proper motions and kinematic membership probabilities for 49 open clusters and possible open clusters in the zone of the Bordeaux PM2000 proper motion catalogue (+11° ≤ δ ≤ +18°). We test different parametrisations of the proper motion and position distribution functions and select the most successful one. In the light of those results, we analyse some objects individually. Methods. We differentiate between cluster and field member stars, and assign membership probabilities, by applying a new and fully automated method based on parametrisations of both the proper motion and position distribution functions, and on genetic algorithm optimization heuristics combined with a derivative-based hill-climbing algorithm for the likelihood optimization. Results. We present a catalogue comprising kinematic parameters and associated membership probability lists for 49 open clusters and possible open clusters in the Bordeaux PM2000 catalogue region. We note that this is the first determination of proper motions for five open clusters. We confirm the non-existence of two kinematic populations in the region of 15 previously suspected non-existent objects.
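A minimal sketch of the membership-probability idea: a two-component (cluster + field) likelihood in proper-motion space, maximized first with a global evolutionary search and then refined with a derivative-based optimizer. This assumes SciPy (differential_evolution standing in for the genetic algorithm) and uses synthetic proper motions, not the PM2000 data:

```python
# Hedged sketch: cluster/field separation via a two-component mixture in
# proper-motion space, global evolutionary fit followed by local refinement.
import numpy as np
from scipy.optimize import differential_evolution, minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
cluster = rng.normal([2.0, -3.0], 0.3, size=(100, 2))   # compact cluster motion (toy)
field   = rng.normal([0.0,  0.0], 3.0, size=(400, 2))   # broad field population (toy)
pm = np.vstack([cluster, field])

def neg_log_like(theta):
    f, mx, my, s_c, s_f = theta
    pdf_c = multivariate_normal.pdf(pm, [mx, my], s_c**2 * np.eye(2))
    pdf_f = multivariate_normal.pdf(pm, [0.0, 0.0], s_f**2 * np.eye(2))
    return -np.sum(np.log(f * pdf_c + (1 - f) * pdf_f + 1e-300))

bounds = [(0.01, 0.99), (-10, 10), (-10, 10), (0.05, 2.0), (0.5, 10.0)]
coarse = differential_evolution(neg_log_like, bounds, seed=4, maxiter=100)  # global search
fine = minimize(neg_log_like, coarse.x, method="L-BFGS-B", bounds=bounds)   # derivative-based refinement

f, mx, my, s_c, s_f = fine.x
pdf_c = multivariate_normal.pdf(pm, [mx, my], s_c**2 * np.eye(2))
pdf_f = multivariate_normal.pdf(pm, [0.0, 0.0], s_f**2 * np.eye(2))
p_member = f * pdf_c / (f * pdf_c + (1 - f) * pdf_f)    # kinematic membership probability
print("estimated cluster fraction:", round(float(f), 3))
```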
Abstract:
Aims. In this work, we describe the pipeline for the fast supervised classification of light curves observed by the CoRoT exoplanet CCDs. We present the classification results obtained for the first four measured fields, which represent one year of in-orbit operation. Methods. The basis of the adopted supervised classification methodology has been described in detail in a previous paper, as has its application to the OGLE database. Here, we present the modifications of the algorithms and of the training set to optimize the performance when applied to the CoRoT data. Results. Classification results are presented for the observed fields IRa01, SRc01, LRc01, and LRa01 of the CoRoT mission. Statistics on the number of variables and the number of objects per class are given, and typical light curves of high-probability candidates are shown. We also report on new stellar variability types discovered in the CoRoT data. The full classification results are publicly available.
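A generic sketch of supervised light-curve classification: simple descriptive features (dominant frequency, spread, amplitude) feed a standard classifier. This is only an illustration under assumed tools (NumPy, scikit-learn) with synthetic sinusoidal curves; it is not the CoRoT/OGLE pipeline itself:

```python
# Hedged sketch: extract crude features from toy light curves and classify them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

def make_curve(freq, amp, n=500):
    t = np.linspace(0, 30, n)
    return t, amp * np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.1, n)

def features(t, flux):
    freqs = np.linspace(0.05, 5, 2000)                    # brute-force periodogram scan
    power = np.array([np.abs(np.sum(flux * np.exp(-2j * np.pi * f * t))) for f in freqs])
    return [freqs[np.argmax(power)], flux.std(), flux.max() - flux.min()]

X, y = [], []
for label, (freq, amp) in enumerate([(0.2, 1.0), (2.0, 0.3)]):   # two toy variability classes
    for _ in range(30):
        X.append(features(*make_curve(freq * rng.uniform(0.9, 1.1), amp)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```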
Abstract:
Aims. A model-independent reconstruction of the cosmic expansion rate is essential to a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory. Methods. Using type Ia supernova data, we show that it is possible to analytically calculate the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we considered the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure using a mock sample of type Ia supernova observations, and then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group. Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying our procedure to the real data, we show that it allows us to determine the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Beyond that, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
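A minimal sketch of the principal-component idea described above: build a Fisher matrix for piecewise-constant values of 1/H(z) from mock supernova redshifts and distance-modulus uncertainties, then diagonalize it to obtain the best-constrained modes. The binning, fiducial model, and error level are illustrative assumptions, not the paper's actual setup:

```python
# Hedged sketch: Fisher matrix for binned 1/H(z) from mock SNe, then PCA via eigendecomposition.
import numpy as np

c = 299792.458                       # speed of light, km/s
z_bins = np.linspace(0.0, 1.5, 16)   # piecewise-constant bins for 1/H(z) (15 bins)
rng = np.random.default_rng(6)
z_sn = rng.uniform(0.02, 1.4, 300)   # mock supernova redshifts
sigma_mu = 0.15                      # assumed distance-modulus uncertainty per SN

def H_fid(z):                        # fiducial H(z) used only to evaluate derivatives
    return 70.0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)

def dmu_dinvH(z, i):
    # d mu / d(1/H_i) = (5 / ln 10) * c * (overlap of [0, z] with bin i) / d_C(z)
    lo, hi = z_bins[i], z_bins[i + 1]
    overlap = np.clip(np.minimum(z, hi) - lo, 0.0, hi - lo)
    zz = np.linspace(1e-4, z, 200)
    d_C = c * np.sum(1.0 / H_fid(zz)) * (zz[1] - zz[0])   # simple Riemann sum for comoving distance
    return 5.0 / np.log(10.0) * c * overlap / d_C

n = len(z_bins) - 1
F = np.zeros((n, n))
for z in z_sn:
    grad = np.array([dmu_dinvH(z, i) for i in range(n)])
    F += np.outer(grad, grad) / sigma_mu ** 2             # Fisher matrix accumulation

eigval, eigvec = np.linalg.eigh(F)                        # principal components of 1/H(z)
best = np.argmax(eigval)
print("uncertainty of best-constrained mode:", 1.0 / np.sqrt(eigval[best]))
```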
Abstract:
Background & Aims: An elevated transferrin saturation is the earliest phenotypic abnormality in hereditary hemochromatosis. Determination of transferrin saturation remains the most useful noninvasive screening test for affected individuals, but there is debate as to the appropriate screening level. The aims of this study were to estimate the mean transferrin saturation in hemochromatosis heterozygotes and normal individuals and to evaluate potential transferrin saturation screening levels. Methods: Statistical mixture modeling was applied to data from a survey of asymptomatic Australians to estimate the mean transferrin saturation in hemochromatosis heterozygotes and normal individuals. To evaluate potential transferrin saturation screening levels, modeling results were compared with data from identified hemochromatosis heterozygotes and homozygotes. Results: After removal of hemochromatosis homozygotes, two populations of transferrin saturation were identified in asymptomatic Australians (P < 0.01). In men, 88.2% of the truncated sample had a lower mean transferrin saturation of 24.1%, whereas 11.8% had an increased mean transferrin saturation of 37.3%. Similar results were found in women. A transferrin saturation threshold of 45% identified 98% of homozygotes without misidentifying any normal individuals. Conclusions: The results confirm that hemochromatosis heterozygotes form a distinct transferrin saturation subpopulation and support the use of transferrin saturation as an inexpensive screening test for hemochromatosis. In practice, a fasting transferrin saturation of greater than or equal to 45% identifies virtually all affected homozygous subjects without necessitating further investigation of unaffected normal individuals.
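A minimal sketch of the mixture-modelling step, assuming scikit-learn's GaussianMixture; the transferrin-saturation values below are simulated around the reported component means, purely to illustrate how a two-component fit and a screening threshold would be evaluated:

```python
# Hedged sketch: two-component mixture model of transferrin saturation and a
# simple check of how many subjects exceed a candidate screening threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
normal_pop = rng.normal(24.1, 5.0, 880)        # "normal" component (simulated)
hetero_pop = rng.normal(37.3, 6.0, 120)        # heterozygote-like component (simulated)
ts = np.concatenate([normal_pop, hetero_pop]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(ts)
order = np.argsort(gm.means_.ravel())
print("component means:", np.round(gm.means_.ravel()[order], 1))
print("component weights:", np.round(gm.weights_[order], 3))

threshold = 45.0
print("fraction of subjects above threshold:", float((ts >= threshold).mean()))
```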
Abstract:
Aim: To look at the characteristics of the Postgraduate Hospital Educational Environment Measure (PHEEM) using data from the UK, Brazil, Chile and the Netherlands, and to examine the reliability and characteristics of PHEEM, especially how the three PHEEM subscales fit with factors derived statistically from the data sets. Methods: Statistical analysis of PHEEM scores from 1563 sets of data, using reliability analysis, exploratory factor analysis and correlations of the derived factors with the three defined PHEEM subscales. Results: PHEEM was very reliable, with an overall Cronbach's alpha of 0.928. Three factors were derived by exploratory factor analysis. Factor One correlated most strongly with the teaching subscale (R=0.802), Factor Two correlated most strongly with the role autonomy subscale (R=0.623) and Factor Three correlated most strongly with the social support subscale (R=0.538). Conclusions: PHEEM is a multi-dimensional instrument. Overall, it is very reliable. There is a good fit between the three defined subscales, derived by qualitative methods, and the three principal factors derived from the data by exploratory factor analysis.
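A minimal sketch of the reliability and factor-analysis steps, assuming NumPy and scikit-learn; the item responses, item count, and subscale grouping are simulated placeholders, not the PHEEM data:

```python
# Hedged sketch: Cronbach's alpha for internal consistency, then an exploratory
# factor analysis and correlation of a factor score with a hypothetical subscale.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(8)
latent = rng.normal(size=(300, 3))                              # three underlying traits
loadings = rng.uniform(0.5, 0.9, size=(40, 3))
items = latent @ loadings.T + rng.normal(0, 0.7, (300, 40))     # 40 items, 300 respondents

def cronbach_alpha(X):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))

fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(items)
subscale = items[:, :14].sum(axis=1)                            # hypothetical 14-item subscale
print("correlation of Factor 1 with subscale:",
      round(np.corrcoef(scores[:, 0], subscale)[0, 1], 3))
```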
Abstract:
Background: Estimates of the performance of carbohydrate-deficient transferrin (CDT) and gamma-glutamyltransferase (GGT) as markers of alcohol consumption have varied widely. Studies have differed in design and subject characteristics. The WHO/ISBRA Collaborative Study allows assessment and comparison of CDT, GGT, and aspartate aminotransferase (AST) as markers of drinking in a large, well-characterized, multicenter sample. Methods: A total of 1863 subjects were recruited from five countries (Australia, Brazil, Canada, Finland, and Japan). Recruitment was stratified by alcohol use, age, and sex. Demographic characteristics, alcohol consumption, and presence of ICD-10 dependence were recorded using an interview schedule based on the AUDADIS. CDT was assayed using CDTect(TM), and GGT and AST by standard methods. Statistical techniques included receiver operating characteristic (ROC) analysis. Multiple regression was used to measure the impact of factors other than alcohol on test performance. Results: CDT and GGT had comparable performance on ROC analysis, with AST performing slightly less well. CDT was a slightly but significantly better marker of high-risk consumption in men. All were more effective for detection of high-risk rather than intermediate-risk drinking. CDT and GGT levels were influenced by body mass index, sex, age, and smoking status. Conclusions: CDT was little better than GGT in detecting high- or intermediate-risk alcohol consumption in this large, multicenter, predominantly community-based sample. As the two tests are relatively independent of each other, their combination is likely to provide better performance than either test alone. Test interpretation should take into account sex, age, and body mass index.
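A minimal sketch of an ROC comparison of two markers plus a regression adjusting for covariates, assuming scikit-learn; the marker values, effect sizes, and covariates are simulated, not the WHO/ISBRA measurements:

```python
# Hedged sketch: compare two biomarkers by ROC AUC, then regress one marker on
# drinking status and covariates to gauge non-alcohol influences (invented data).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n = 1000
high_risk = rng.integers(0, 2, n)                  # 0 = low/intermediate, 1 = high-risk drinker
age = rng.uniform(18, 70, n)
bmi = rng.normal(26, 4, n)
cdt = 15 + 8 * high_risk + 0.05 * bmi + rng.normal(0, 4, n)
ggt = 30 + 20 * high_risk + 0.3 * age + rng.normal(0, 12, n)

for name, marker in [("CDT", cdt), ("GGT", ggt)]:
    print(f"{name} AUC:", round(roc_auc_score(high_risk, marker), 3))

covariates = np.column_stack([high_risk, age, bmi])
print("CDT coefficients (drinking, age, BMI):",
      np.round(LinearRegression().fit(covariates, cdt).coef_, 3))
```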
Abstract:
In the initial stage of this work, two potentiometric methods were used to determine the salt (sodium chloride) content in bread and dough samples from several cities in the north of Portugal. A reference method (potentiometric precipitation titration) and a newly developed ion-selective chloride electrode (ISE) were applied. Both methods determine the sodium chloride content through the quantification of chloride. To evaluate the accuracy of the ISE, bread samples and the respective dough samples were analyzed by both methods. Statistical analysis (0.05 significance level) indicated that the results of the two methods did not differ significantly. Therefore, the ISE is an adequate alternative for the determination of chloride in the analyzed samples. To compare the results of these chloride-based methods with a sodium-based method, sodium was quantified in the same samples by a reference method (atomic absorption spectrometry). Significant differences between the results were found. In several cases the sodium chloride content exceeded the legal limit when the chloride-based methods were used, but not when the sodium-based method was applied. This could lead to the erroneous application of fines, and therefore the authorities should supply additional information regarding the analytical procedure to be used for this particular control.
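A minimal sketch of the kind of two-method comparison at the 0.05 significance level (a paired t-test on the same samples), assuming SciPy; the chloride values are invented, not the Portuguese bread and dough measurements:

```python
# Hedged sketch: paired comparison of chloride determined by the reference
# titration and by the ion-selective electrode on the same samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
true_cl = rng.uniform(1.0, 2.0, 25)                  # g NaCl / 100 g, per sample (toy values)
titration = true_cl + rng.normal(0, 0.05, 25)
ise       = true_cl + rng.normal(0, 0.06, 25)

t, p = stats.ttest_rel(titration, ise)
print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")
print("methods differ significantly" if p < 0.05 else "no significant difference at the 0.05 level")
```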
Abstract:
Introduction: Hearing loss has a considerable impact on the development and academic progress of a child. In several developed countries, early detection is part of the national health plan through universal neonatal hearing screening (UNHS) and school hearing screening programs (SHSP), but only a few have published national data and revised protocols. Currently, in Portugal, UNHS is implemented in the main district hospitals but SHSP is not, and there are still no concrete data or published studies on the national situation. Objectives: The incidence of hearing loss and of otological problems was studied in school communities in the north of the country, with 2550 participants between 3 and 17 years old. Methods: Statistical data were collected in the schools with a standard hearing screening protocol. All participants were evaluated with the same protocol: an audiological anamnesis, otoscopy, and an audiometric screening exam (500, 1000, 2000 and 4000 Hz) were performed. Results: Different otological problems were identified, and the audiometric screening exam revealed auditory thresholds indicating unilateral or bilateral hearing loss in about 5.7% of the cases. Conclusions: The study demonstrates that school hearing screening should take place as early as possible and be part of primary health care, in order to identify children and direct them to appropriate rehabilitation, education and follow-up, thus reducing the high costs of late treatment.
Abstract:
Master's dissertation in Systems Engineering