903 results for Data selection
Abstract:
Within Data Envelopment Analysis, several alternative models allow for an environmental adjustment, and the majority of them deliver divergent results. Decision makers therefore face the difficult task of selecting the most suitable model. This study addresses that difficulty and thereby fills a research gap. First, a two-step web-based survey is conducted, aiming (1) to identify the selection criteria, (2) to prioritize and weight the selection criteria with respect to the goal of selecting the most suitable model, and (3) to collect preferences about which model is preferable for fulfilling each selection criterion. Second, the Analytic Hierarchy Process is used to quantify the preferences expressed in the survey. Results show that the understandability, applicability, and acceptability of the alternative models are valid selection criteria. The selection of the most suitable model depends on the preferences of the decision makers with regard to these criteria.
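To make the AHP step concrete, here is a minimal sketch of how criterion weights can be derived from a pairwise comparison matrix via the principal eigenvector, with Saaty's consistency ratio as a sanity check; the matrix entries are hypothetical, not the survey's judgments.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix over the three criteria
# (understandability, applicability, acceptability); Saaty-scale values
# are illustrative, not taken from the study.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio: compare lambda_max against Saaty's random index (0.58 for n = 3).
lambda_max = eigvals.real[k]
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58
print("weights:", w.round(3), "CR:", round(cr, 3))
```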
Abstract:
Geophysical data may provide crucial information about hydrological properties, states, and processes that are difficult to obtain by other means. Large data sets can be acquired over widely different scales in a minimally invasive manner and at comparatively low cost, but their effective use in hydrology requires an understanding of the fidelity of geophysical models, the assumptions made in their construction, and the links between geophysical and hydrological properties. Geophysics has been applied to groundwater prospecting for almost a century, but only in the last 20 years has it been regularly used together with classical hydrological data to build predictive hydrological models. A largely unexplored avenue for future work is to use geophysical data to falsify or rank competing conceptual hydrological models. A promising cornerstone for such a model selection strategy is the Bayes factor, but it can only be calculated reliably when the main sources of uncertainty are considered throughout the hydrogeophysical parameter estimation process. Most classical geophysical imaging tools tend to favor models with smoothly varying property fields that are at odds with most conceptual hydrological models of interest. It is thus necessary to account for this bias or to use alternative approaches in which the proposed conceptual models are honored at all steps of the model building process.
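The Bayes factor is the ratio of the competing models' marginal likelihoods; the sketch below estimates it by simple Monte Carlo averaging of the likelihood over prior draws, using toy Gaussian models rather than actual hydrogeophysical ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy observations of a property (illustrative only).
y = rng.normal(1.0, 0.5, size=20)

def log_marginal(y, prior_draws, sigma=0.5):
    """Monte Carlo estimate of log p(y | model): average the likelihood
    over parameter draws from the model's prior."""
    ll = np.array([
        np.sum(-0.5 * ((y - m) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))
        for m in prior_draws
    ])
    # log-mean-exp for numerical stability
    return np.logaddexp.reduce(ll) - np.log(len(ll))

# Two competing conceptual models, encoded as different priors on the mean
# (hypothetical): Model 1 says mean ~ N(0, 1), Model 2 says mean ~ N(2, 1).
logZ1 = log_marginal(y, rng.normal(0.0, 1.0, 5000))
logZ2 = log_marginal(y, rng.normal(2.0, 1.0, 5000))
print("Bayes factor BF12 ~", np.exp(logZ1 - logZ2))
```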
Abstract:
Alpine tree-line ecotones are characterized by marked changes at small spatial scales that may result in a variety of physiognomies. A set of alternative individual-based models was tested with data from four contrasting Pinus uncinata ecotones in the central Spanish Pyrenees to reveal the minimal subset of processes required for tree-line formation. A Bayesian approach combined with Markov chain Monte Carlo methods was employed to obtain the posterior distribution of model parameters, allowing the use of model selection procedures. The main features of real tree lines emerged only in models considering nonlinear responses in individual rates of growth or mortality with respect to the altitudinal gradient. Variation in tree-line physiognomy mainly reflected changes in the relative importance of these nonlinear responses, while other processes, such as dispersal limitation and facilitation, played a secondary role. Different nonlinear responses also determined the presence or absence of krummholz, in agreement with recent findings highlighting the different responses of diffuse and abrupt or krummholz tree lines to climate change. The method presented here can be widely applied to individual-based simulation models and will turn model selection and evaluation for this type of model into a more transparent, effective, and efficient exercise.
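As a hedged illustration of the Bayesian/MCMC machinery, the following random-walk Metropolis sketch fits a nonlinear (logistic) mortality response along an altitudinal gradient; the data, response form, and error model are invented for illustration and are not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "mortality vs. altitude" data (illustrative, not the Pyrenees data).
alt = np.linspace(0, 1, 40)
mort = 1 / (1 + np.exp(-10 * (alt - 0.6))) + rng.normal(0, 0.05, 40)

def log_post(theta):
    """Log-posterior for a logistic mortality response, theta = (steepness k,
    inflection x0), with flat priors and Gaussian observation error."""
    k, x0 = theta
    pred = 1 / (1 + np.exp(-k * (alt - x0)))
    return -0.5 * np.sum(((mort - pred) / 0.05) ** 2)

# Random-walk Metropolis: accept a proposal with probability
# min(1, posterior ratio), then record the current state.
theta = np.array([5.0, 0.5])
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1, 2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
samples = np.array(samples[5000:])  # drop burn-in
print("posterior means (k, x0):", samples.mean(axis=0).round(2))
```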
Abstract:
Gene filtering is a useful preprocessing technique often applied to microarray datasets. However, it is not common practice because clear guidelines are lacking and it carries the risk of excluding some potentially relevant genes. In this work, we propose to model microarray data as a mixture of two Gaussian distributions, which allows us to obtain an optimal filter threshold in terms of the gene expression level.
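A minimal sketch of the proposed idea, assuming log-expression levels and using scikit-learn's GaussianMixture: fit two Gaussians and place the filter threshold where the posterior probability of the "expressed" component crosses 0.5 (the data and the 0.5 rule are illustrative assumptions, not the paper's exact criterion).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Hypothetical log-expression levels: a "noise" component and a "signal"
# component, the two Gaussians the mixture model assumes.
x = np.concatenate([rng.normal(2.0, 0.5, 3000),   # unexpressed genes
                    rng.normal(5.0, 1.0, 1000)])  # expressed genes

gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))

# Filter threshold: the point between the two component means where the
# posterior probability of the higher-mean ("expressed") component is 0.5.
lo, hi = np.sort(gmm.means_.ravel())
grid = np.linspace(lo, hi, 1000).reshape(-1, 1)
post = gmm.predict_proba(grid)[:, np.argmax(gmm.means_.ravel())]
threshold = grid[np.argmin(np.abs(post - 0.5))][0]
print("filter threshold ~", round(threshold, 2))
```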
Abstract:
OBJECTIVE: The objective was to determine the risk of stroke associated with subclinical hypothyroidism. DATA SOURCES AND STUDY SELECTION: Published prospective cohort studies were identified through a systematic search through November 2013, without restrictions, in several databases. Unpublished studies were identified through the Thyroid Studies Collaboration. We collected individual participant data on thyroid function and stroke outcome. Euthyroidism was defined as TSH levels of 0.45-4.49 mIU/L, and subclinical hypothyroidism as TSH levels of 4.5-19.9 mIU/L with normal T4 levels. DATA EXTRACTION AND SYNTHESIS: We collected individual participant data on 47,573 adults (3,451 with subclinical hypothyroidism) from 17 cohorts followed up from 1972-2014 (489,192 person-years). Age- and sex-adjusted pooled hazard ratios (HRs) for participants with subclinical hypothyroidism compared with euthyroidism were 1.05 (95% confidence interval [CI], 0.91-1.21) for stroke events (combined fatal and nonfatal stroke) and 1.07 (95% CI, 0.80-1.42) for fatal stroke. Stratified by age, the HR for stroke events was 3.32 (95% CI, 1.25-8.80) for individuals aged 18-49 years. There was an increased risk of fatal stroke in the age groups 18-49 and 50-64 years, with HRs of 4.22 (95% CI, 1.08-16.55) and 2.86 (95% CI, 1.31-6.26), respectively (p for trend = 0.04). We found no increased risk for those 65-79 years old (HR, 1.00; 95% CI, 0.86-1.18) or ≥80 years old (HR, 1.31; 95% CI, 0.79-2.18). There was a pattern of increased risk of fatal stroke with higher TSH concentrations. CONCLUSIONS: Although no overall effect of subclinical hypothyroidism on stroke could be demonstrated, an increased risk was observed in subjects younger than 65 years and in those with higher TSH concentrations.
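For intuition on where pooled HRs of this form come from, here is a minimal fixed-effect inverse-variance sketch that pools cohort-level log hazard ratios; the study itself pooled individual participant data, and the input numbers below are invented.

```python
import numpy as np

def pool_hazard_ratios(hrs, ci_los, ci_his):
    """Fixed-effect inverse-variance pooling of log hazard ratios.
    Standard errors are recovered from the 95% CIs on the log scale."""
    log_hr = np.log(hrs)
    se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)
    w = 1 / se ** 2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * pooled_se),
            np.exp(pooled + 1.96 * pooled_se))

# Hypothetical cohort-level HRs (illustrative, not the study's 17 cohorts).
hr, lo, hi = pool_hazard_ratios([1.10, 0.95, 1.20],
                                [0.80, 0.70, 0.90],
                                [1.51, 1.29, 1.60])
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```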
Abstract:
PURPOSE: To improve coronary magnetic resonance angiography (MRA) by combining a two-dimensional (2D) spatially selective radiofrequency (RF) pulse with a T2-preparation module ("2D-T2-Prep"). METHODS: An adiabatic T2-Prep was modified so that the first and last pulses differed in spatial selectivity. The first RF pulse was replaced by a 2D pulse, such that a pencil-beam volume is excited. The last RF pulse remains nonselective, thus restoring the T2-prepared pencil-beam while tipping the (formerly longitudinal) magnetization outside of the pencil-beam into the transverse plane, where it is then spoiled. Thus, only a cylinder of T2-prepared tissue remains for imaging. Numerical simulations were followed by phantom validation and in vivo coronary MRA, where the technique was quantitatively evaluated. Reduced field-of-view (rFoV) images were similarly studied. RESULTS: In vivo, the full field-of-view 2D-T2-Prep significantly improved vessel sharpness as compared with the conventional T2-Prep, without adversely affecting the signal-to-noise ratio (SNR) or contrast-to-noise ratio (CNR). It also reduced respiratory motion artifacts. In rFoV images, the SNR, CNR, and vessel sharpness decreased, although scan time was reduced by 60%. CONCLUSION: Compared with the conventional T2-Prep, the 2D-T2-Prep improves vessel sharpness and decreases respiratory ghosting while preserving both SNR and CNR, and it can also be used to acquire rFoV images for accelerated data acquisition.
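A hedged back-of-the-envelope of why a T2-preparation module generates contrast: prepared transverse magnetization decays as exp(-TE_prep/T2), so tissues with different T2 retain different signal. The T2 values and preparation time below are typical literature ballpark figures, not this paper's measurements.

```python
import numpy as np

# Ballpark T2 values at 1.5 T (assumed, for illustration only).
t2 = {"arterial blood": 250e-3, "myocardium": 50e-3}  # seconds
te_prep = 40e-3  # a common T2-Prep echo time (assumption)

# Relative signal remaining after the preparation: exp(-TE_prep / T2).
signal = {tissue: np.exp(-te_prep / value) for tissue, value in t2.items()}
for tissue, s in signal.items():
    print(f"{tissue}: relative signal {s:.2f}")
print("blood-myocardium contrast after T2-Prep:",
      round(signal["arterial blood"] - signal["myocardium"], 2))
```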
Abstract:
Drying is a major step in the pharmaceutical manufacturing process, and the selection of the dryer and its operating conditions is sometimes a bottleneck. Despite the difficulties, these bottlenecks are handled with utmost care because of good manufacturing practice (GMP) requirements and the industry's image in the global market. The purpose of this work is to investigate the use of existing knowledge for selecting a dryer and its operating conditions for drying pharmaceutical materials, with the help of methods such as case-based reasoning and decision trees, in order to reduce the time and expenditure required for research. The work consisted of two major parts: a literature survey covering the theories of spray drying, case-based reasoning, and decision trees; and an experimental part comprising data acquisition and testing of the models on existing and updated data. Testing resulted in a combination of the two models, case-based reasoning and decision trees, leading to more specific results when compared with conventional methods.
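As a sketch of the decision-tree half of the approach, the following trains a small tree on invented dryer-selection cases; the features, labels, and data are hypothetical, not the thesis's knowledge base.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical cases: [heat sensitive (0/1), feed is liquid (0/1),
# required throughput (t/h)] -> recommended dryer type.
X = [
    [1, 1, 2.0], [1, 1, 5.0], [0, 1, 1.0],
    [0, 0, 0.5], [1, 0, 0.3], [0, 0, 3.0],
]
y = ["spray", "spray", "spray", "tray", "freeze", "fluid-bed"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["heat_sensitive", "liquid_feed",
                                       "throughput_tph"]))

# A new case: heat-sensitive liquid feed at 3 t/h.
print(tree.predict([[1, 1, 3.0]]))  # -> "spray" for this toy training set
```

In the thesis's setting, the case-based reasoning component would complement such a tree by retrieving and adapting the most similar historical drying cases.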
Abstract:
Objective. Recently, significant advances have been made in the early diagnosis of Alzheimer's disease (AD) from EEG. However, choosing suitable measures is a challenging task. Among other measures, frequency relative power and loss of complexity have been used with promising results. In the present study we investigate the early diagnosis of AD using synchrony measures and frequency relative power on EEG signals, examining the changes found in different frequency ranges. Approach. We first explore the use of a single feature for computing the classification rate, looking for the best frequency range. Then, we present a multiple-feature classification system that outperforms all previous results using a feature selection strategy. These two approaches are tested on two different databases, one containing MCI and healthy subjects (patient age: 71.9 ± 10.2; healthy subject age: 71.7 ± 8.3), and the other containing Mild AD and healthy subjects (patient age: 77.6 ± 10.0; healthy subject age: 69.4 ± 11.5). Main Results. Using a single feature to compute classification rates, we achieve a performance of 78.33% for the MCI data set and 97.56% for Mild AD. Results are clearly improved using the multiple-feature classification, where a classification rate of 95% is found for the MCI data set using 11 features, and 100% for the Mild AD data set using 4 features. Significance. The new feature selection method described in this work may be a reliable tool that could help to design a realistic system that does not require prior knowledge of a patient's status. With that aim, we explore the standardization of features across the MCI and Mild AD data sets, with promising results.
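A minimal sketch of a multiple-feature classification system with greedy forward feature selection, using synthetic stand-ins for the EEG features; this is a generic scikit-learn pipeline, not the paper's exact selection strategy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical EEG features (e.g., relative power per band, synchrony
# measures) for patients vs. controls; values are synthetic.
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, size=60)
X[y == 1, :3] += 1.0  # make the first three features informative

# Greedy forward selection of a small feature subset, scored by
# cross-validation at each step.
clf = LogisticRegression(max_iter=1000)
sfs = SequentialFeatureSelector(clf, n_features_to_select=4,
                                direction="forward", cv=5).fit(X, y)
picked = np.flatnonzero(sfs.get_support())
score = cross_val_score(clf, X[:, picked], y, cv=5).mean()
print("selected features:", picked, "CV accuracy:", round(score, 3))
```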
Abstract:
Recent years have seen great advances in instrumentation technology. The amount of available data has been increasing owing to the simplicity, speed, and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis, which has been one of the reasons for the growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered; the more advanced methods include multi-block modeling and nonlinear modeling. In this thesis it is shown that the results of data analysis vary according to the modeling approach used, making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should differ from the case where the purpose of modeling is mainly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply; in this way the methods and results can be compared and an approach selected that suits the intended purpose. In this thesis, differences in data analysis methods are compared using data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries, and the results are compared with those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry; the response has a nonlinear relation to the descriptor matrix, and the results are compared among linear modeling, polynomial PLS, and nonlinear modeling using nonlinear score vectors.
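As a small illustration of why PLS suits undesigned, collinear industrial data, the sketch below fits PLS models with varying numbers of latent variables to synthetic spectra and scores them by cross-validation; all data are simulated, not from the thesis's industrial sets.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic "industrial" spectra: 100 routine samples x 50 collinear
# variables, with the response driven by a few latent directions.
latent = rng.normal(size=(100, 3))
X = latent @ rng.normal(size=(3, 50)) + 0.1 * rng.normal(size=(100, 50))
y = latent[:, 0] - 0.5 * latent[:, 1] + 0.1 * rng.normal(size=100)

# Choose the number of latent variables by cross-validated R^2,
# mirroring how PLS model complexity is usually selected.
for n in (1, 2, 3, 5):
    r2 = cross_val_score(PLSRegression(n_components=n), X, y, cv=5).mean()
    print(f"{n} latent variables: CV R^2 = {r2:.3f}")
```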
Abstract:
Among the challenges of pig farming in today's competitive market is product traceability, which ensures, among many other points, animal welfare. Vocalization is a valuable tool to identify situations of stress in pigs, and it can be used in welfare records for traceability. The objective of this work was to identify stress in piglets using vocalization, classifying the stress into three levels: no stress, moderate stress, and acute stress. An experiment was conducted on a commercial farm in the municipality of Holambra, São Paulo State, where the vocalizations of twenty piglets were recorded during the castration procedure; the piglets were separated into two groups: without anesthesia, and with local anesthesia with lidocaine base. For the recording of acoustic signals, a unidirectional microphone was connected to a digital recorder, and the signals were digitized at a sampling frequency of 44,100 Hz. For evaluation of the sound signals, Praat® software was used, and different data mining algorithms were applied using Weka® software. The selection of attributes improved model accuracy; the best attribute selection was obtained by applying the Wrapper method, while the best classification algorithms were k-NN and Naive Bayes. According to the results, it was possible to classify the level of stress in pigs through their vocalization.
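The sketch below mimics the modeling step: wrapper-style attribute selection followed by k-NN and Naive Bayes classification, in scikit-learn rather than Weka, on invented acoustic attributes.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic acoustic attributes (e.g., pitch, intensity, formants as Praat
# might export); three stress classes. Data are invented for illustration.
X = rng.normal(size=(120, 10))
y = rng.integers(0, 3, size=120)  # 0 = none, 1 = moderate, 2 = acute
X[:, 0] += y                      # make one attribute informative

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("Naive Bayes", GaussianNB())]:
    # Wrapper selection: candidate attribute subsets are scored by the
    # classifier itself, as in the Weka Wrapper method.
    sfs = SequentialFeatureSelector(clf, n_features_to_select=3, cv=5).fit(X, y)
    Xs = X[:, sfs.get_support()]
    acc = cross_val_score(clf, Xs, y, cv=5).mean()
    print(f"{name}: accuracy {acc:.2f} with attributes "
          f"{np.flatnonzero(sfs.get_support())}")
```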
Abstract:
In this study, feature selection in classification problems is highlighted. The role of feature selection methods is to select important features by discarding redundant and irrelevant features in the data set; we investigated this using fuzzy entropy measures. We developed a fuzzy entropy-based feature selection method using Yu's similarity and tested it with a similarity classifier, also based on Yu's similarity, on a real-world dermatological data set. Performing feature selection based on fuzzy entropy measures before classification produced very promising empirical results: the highest classification accuracy achieved on this data set was 98.83%. These results were then compared with results previously obtained using different similarity classifiers, and they show better accuracy than those achieved before. The methods used helped to reduce the dimensionality of the data set and to speed up the computation time of the learning algorithm, thereby simplifying the classification task.
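As a hedged sketch of entropy-based feature ranking, the following scores features by the classical De Luca-Termini fuzzy entropy of their class memberships and keeps the lowest-entropy (most discriminative) ones; the paper's specific entropy measure and Yu's similarity are not reproduced here, and the membership data are synthetic.

```python
import numpy as np

def fuzzy_entropy(mu):
    """De Luca-Termini fuzzy entropy of membership values mu in [0, 1].
    Low entropy means memberships cluster near 0 or 1, i.e., the feature
    separates the classes crisply."""
    mu = np.clip(mu, 1e-12, 1 - 1e-12)
    return -np.sum(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))

rng = np.random.default_rng(6)

# Hypothetical memberships of 100 samples to the "relevant" class, one
# column per feature; smaller beta parameters push values toward 0/1.
memberships = rng.beta([0.3, 0.5, 1, 2, 5], [0.3, 0.5, 1, 2, 5],
                       size=(100, 5))
scores = [fuzzy_entropy(memberships[:, j]) for j in range(5)]

# Rank features from most to least discriminative (lowest entropy first).
ranked = np.argsort(scores)
print("features ranked most to least discriminative:", ranked)
```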
Abstract:
This study aimed to assess the degree of similarity presented by thematic maps generated from different sampling grids of weed plants in a commercial agricultural area of 7.95 hectares. Monocotyledons and dicotyledons were counted for the 2012/2013 and 2013/2014 harvests, before soybean planting, in the fallow period after the wheat harvest in both years. A regular 10 x 10 m grid, used as the reference, was produced to sample the weeds; counting was done in 1 m² at each sample point, totaling 795 samples in each year, and this grid was compared with regular grids of 30 and 50 m generated by excluding data from the standard grid. Twenty-two composite soil samples were taken at a depth of 0-20 cm to correlate soil properties with weed occurrence. For the generation of the thematic maps, Inverse Distance Weighting (IDW) interpolation was used; when comparing the maps generated from each grid with the reference map, the kappa coefficient was used to assess the loss of map quality as the number of sample points was reduced. The loss of map quality was lower in 2013 than in 2012 when the sampling density of the points was reduced. The 30 x 30 m grids satisfactorily described the infestation data for the dicotyledons, and the 50 x 50 m grids adequately described the monocotyledon weed infestation, compared with the standard 10 x 10 m grids.
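A minimal sketch of the two tools named above, IDW interpolation and the kappa coefficient, on invented weed counts; the grid sizes and values are illustrative, not the study's.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2):
    """Inverse Distance Weighting: each prediction is a weighted mean of
    known values, with weights 1 / distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)  # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w @ z_known) / w.sum(axis=1)

def kappa(a, b):
    """Cohen's kappa between two classified maps a and b (same shape)."""
    cats = np.union1d(a, b)
    cm = np.array([[np.sum((a == i) & (b == j)) for j in cats] for i in cats])
    po = np.trace(cm) / cm.sum()
    pe = cm.sum(axis=1) @ cm.sum(axis=0) / cm.sum() ** 2
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(7)

# Hypothetical weed counts at scattered sample points in a 100 x 100 m field.
xy = rng.uniform(0, 100, size=(40, 2))
z = rng.poisson(5, size=40).astype(float)

grid = np.array([[x, y] for x in range(0, 100, 10)
                 for y in range(0, 100, 10)], dtype=float)
z_hat = idw(xy, z, grid)

classes = lambda v: np.digitize(v, [3, 7])  # low / medium / high infestation
print("interpolated surface, first 5 cells:", z_hat[:5].round(1))
print("kappa vs. itself (sanity check, should be 1):",
      kappa(classes(z_hat), classes(z_hat)))
```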
Abstract:
Data on corn ear production (kg/ha) of 196 half-sib progenies (HSP) of the maize population CMS-39, obtained from experiments carried out in four environments, were used to adapt and assess the BLP (best linear predictor) method in comparison with selection among and within half-sib progenies (SAWHSP). The 196 HSP of the CMS-39 population, developed by the National Center for Maize and Sorghum Research (CNPMS-EMBRAPA), were related through their pedigree to the recombined progenies of the previous selection cycle. The two methodologies used for the selection of the twenty best half-sib progenies, BLP and SAWHSP, led to similar expected genetic gains. The BLP methodology tended to select a greater number of progenies related through the previous generation (pedigree) than the other method, which implies that greater care with the effective size of the population must be taken with this method. The SAWHSP methodology was efficient in isolating the additive genetic variance component from the phenotypic component. The pedigree system, although unnecessary for the routine use of the SAWHSP methodology, allowed the prediction of an increase in the inbreeding of the population under long-term SAWHSP selection when recombination is simultaneous with the creation of new progenies.
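For intuition, here is a simplified, pedigree-free sketch of best linear prediction for half-sib family means: deviations from the population mean are shrunk by a reliability factor built from assumed variance components. All numbers below are invented, not CMS-39 estimates.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical half-sib family means for ear production (kg/ha).
n_fam, n_plants = 196, 10
sigma2_between, sigma2_within = 4e4, 3.6e5  # assumed variance components

fam_effect = rng.normal(0, np.sqrt(sigma2_between), n_fam)
means = 6000 + fam_effect + rng.normal(0, np.sqrt(sigma2_within / n_plants),
                                       n_fam)

# Best linear prediction of each family's genetic value: shrink the family
# mean deviation by the reliability b = Var(genetic) / Var(family mean).
b = sigma2_between / (sigma2_between + sigma2_within / n_plants)
blp = b * (means - means.mean())

best20 = np.argsort(blp)[::-1][:20]  # select the twenty best progenies
print(f"shrinkage b = {b:.2f}; top-5 families: {best20[:5]}")
```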
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper, and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended so that it could be implemented on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Furthermore, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
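A minimal sketch of the nested cross-validation point: keeping the variant filter inside the pipeline so it is re-fit within each outer fold, which avoids the overly optimistic accuracies the abstract warns about. The dimensions are shrunk and the data synthetic; real GWAS have hundreds of thousands of variants.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(9)

# Synthetic GWAS-like data: 200 individuals x 5000 variants with 0/1/2
# genotype coding, and a weak signal planted in 10 variants.
X = rng.integers(0, 3, size=(200, 5000)).astype(float)
y = rng.integers(0, 2, size=200)
X[y == 1, :10] += 0.6

# Nested CV: the filter (SelectKBest) lives INSIDE the pipeline, so each
# outer fold re-selects variants on its own training data. Selecting
# features on the full data first is what inflates accuracy estimates.
pipe = Pipeline([("filter", SelectKBest(f_classif)),
                 ("clf", LogisticRegression(max_iter=1000))])
inner = GridSearchCV(pipe, {"filter__k": [10, 50, 200]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
print("nested-CV accuracy:", outer_scores.mean().round(3))
```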