104 results for Learning methods
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with that of the already known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performance.
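As background for the generalization measure used above, a minimal sketch of the discrete Kullback-Leibler divergence; the function name and the example distributions are illustrative, not taken from the paper:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions given as sequences of probabilities over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy comparison of a "learned" distribution against the true one
# (numbers purely illustrative):
true_dist = [0.5, 0.3, 0.2]
learned = [0.4, 0.4, 0.2]
print(round(kl_divergence(true_dist, learned), 4))
```

D(p || q) is zero exactly when the learned distribution matches the truth, which is why it serves as a learning-curve metric.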
Abstract:
Thanks to recent advances in molecular biology, allied to an ever increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared with the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly-random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). 
The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increased. The signal size was important for the inference method to achieve better accuracy in the network identification rate, and it presented very good results even with small expression profiles. However, the adopted inference method was not able to distinguish different structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks, identifying some properties of the evaluated method, and it can be extended to other inference methods.
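Steps (1) and (3) of the framework above can be illustrated with a minimal sketch: generating a uniformly-random ER network and scoring an identified network against the original by its edge recovery rate. The function names, sizes, and the toy "inferred" network are assumptions for illustration, not the authors' implementation:

```python
import random

def erdos_renyi_digraph(n, k, seed=0):
    """Directed ER graph over n genes with average degree ~k: each ordered
    pair (i, j), i != j, becomes an edge with probability p = k / (n - 1)."""
    rng = random.Random(seed)
    p = k / (n - 1)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and rng.random() < p}

def recovery_rate(true_edges, inferred_edges):
    """Fraction of the original network's edges recovered by inference."""
    return len(true_edges & inferred_edges) / len(true_edges)

agn = erdos_renyi_digraph(n=50, k=4)
# Toy "identified" network: the original minus ten missed edges.
inferred = set(list(agn)[:-10])
print(f"recovery rate: {recovery_rate(agn, inferred):.2f}")
```

The same recovery-rate comparison applies unchanged to WS, BA, or geographical topologies; only the generator in step (1) is swapped.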
Abstract:
Background: Mutations in TP53 are common events during carcinogenesis. In addition to gene mutations, several reports have focused on TP53 polymorphisms as risk factors for malignant disease. Many studies have highlighted that the status of the TP53 codon 72 polymorphism could influence cancer susceptibility. However, the results have been inconsistent and various methodological features can contribute to departures from Hardy-Weinberg equilibrium, a condition that may influence the disease risk estimates. The most widely accepted method of detecting genotyping error is to confirm genotypes by sequencing and/or via a separate method. Results: We developed two new genotyping methods for TP53 codon 72 polymorphism detection: Denaturing High Performance Liquid Chromatography (DHPLC) and Dot Blot hybridization. These methods were compared with Restriction Fragment Length Polymorphism (RFLP) using two different restriction enzymes. We observed high agreement among all methodologies assayed. Dot-blot hybridization and DHPLC results were more highly concordant with each other than when either of these methods was compared with RFLP. Conclusions: Although variations may occur, our results indicate that DHPLC and Dot Blot hybridization can be used as reliable screening methods for TP53 codon 72 polymorphism detection, especially in molecular epidemiologic studies, where high throughput methodologies are required.
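Departure from Hardy-Weinberg equilibrium, mentioned above as a condition that may bias risk estimates, is commonly checked with a one-degree-of-freedom chi-square statistic on the genotype counts; a minimal sketch, with invented counts for illustration:

```python
def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square statistic (1 df) testing Hardy-Weinberg equilibrium
    from observed genotype counts at a biallelic locus."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # frequency of allele A
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_Aa, n_aa)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Counts in perfect HWE (p = 0.6) give a statistic of ~0; values above
# 3.84 reject equilibrium at the 5% level.
print(hwe_chi_square(360, 480, 160))
```

A locus failing this check in controls is the classic signal of the genotyping error the abstract discusses.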
Abstract:
The aim of this study was to compare the learning process of a highly complex ballet skill following demonstrations by point-light and video models. Sixteen participants, divided into point-light and video groups (ns = 8), performed 160 trials of a pirouette, equally distributed in blocks of 20 trials, alternating periods of demonstration and practice, with a retention test a day later. Measures of head and trunk oscillation, coordination, disparity from the model, and movement time difference showed similarities between the video and point-light groups; ballet experts' evaluations indicated superior performance in the video group over the point-light group. Results are discussed in terms of the task requirement of dissociation between head and trunk rotations, focusing on the hypothesis of the sufficiency and higher relevance of information contained in biological-motion models applied to the learning of complex motor skills.
Abstract:
It has been demonstrated that laser-induced breakdown spectrometry (LIBS) can be used as an alternative method for the determination of macro- (P, K, Ca, Mg) and micronutrients (B, Fe, Cu, Mn, Zn) in pellets of plant materials. However, information is required regarding sample preparation for plant analysis by LIBS. In this work, methods involving cryogenic grinding and planetary ball milling were evaluated for leaf comminution before pellet preparation. The particle sizes were associated with chemical sample properties, such as fiber and cellulose contents, as well as with pellet porosity and density. The pellets were ablated at 30 different sites by applying 25 laser pulses per site (Nd:YAG @ 1064 nm, 5 ns, 10 Hz, 25 J cm⁻²). The plasma emission collected by lenses was directed through an optical fiber towards a high-resolution echelle spectrometer equipped with an ICCD. Delay time and integration time gate were fixed at 2.0 and 4.5 μs, respectively. Experiments carried out with pellets of sugarcane, orange tree and soy leaves showed a significant effect of the plant species on the choice of the most appropriate grinding conditions. Using ball milling with agate materials, 20 min of grinding for orange tree and soy, and 60 min for sugarcane leaves, led to particle size distributions generally below 75 μm. Cryogenic grinding yielded similar particle size distributions after 10 min for orange tree, 20 min for soy and 30 min for sugarcane leaves. There was up to 50% emission signal enhancement in LIBS measurements for most elements from improving the particle size distribution and consequently the pellet porosity. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines for the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy values, estimated via cross-validation, delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. 
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged for all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
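The sensitivity to the kernel radius discussed above can be illustrated with the two radial basis functions the study names. This is a minimal sketch: the exact parameterizations used by the authors are assumed, and the feature vectors are toy values:

```python
import math

def gaussian_rbf(x, y, radius):
    """Gaussian RBF kernel: exp(-||x - y||^2 / (2 * radius^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * radius ** 2))

def exponential_rbf(x, y, radius):
    """Exponential RBF kernel (one common form): exp(-||x - y|| / (2 * radius^2))."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-d / (2 * radius ** 2))

# The same pair of feature vectors looks dissimilar under a small radius
# and near-identical under a large one -- the source of the sharp
# accuracy variations across the 26 radius values.
x, y = [1.0, 2.0], [2.0, 4.0]
for r in (0.5, 1.0, 5.0):
    print(r, round(gaussian_rbf(x, y, r), 4))
```

Sweeping the radius over a grid, as the study does, traces out exactly the stability/sharp-variation zones described in the conclusions.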
Abstract:
The aim of this paper is to highlight some of the methods of imagetic information representation, reviewing the literature of the area and proposing a methodological model adapted to Brazilian museums. A methodology of imagetic information representation is elaborated, based on Brazilian characteristics of information treatment, in order to adapt it to museums. Finally, spreadsheets illustrating this methodology are presented.
Abstract:
ARTIOLI, G. G., B. GUALANO, E. FRANCHINI, F. B. SCAGLIUSI, M. TAKESIAN, M. FUCHS, and A. H. LANCHA. Prevalence, Magnitude, and Methods of Rapid Weight Loss among Judo Competitors. Med. Sci. Sports Exerc., Vol. 42, No. 3, pp. 436-442, 2010. Purpose: To identify the prevalence, magnitude, and methods of rapid weight loss among judo competitors. Methods: Athletes (607 males and 215 females; age = 19.3 +/- 5.3 yr, weight = 70 +/- 7.5 kg, height = 170.6 +/- 9.8 cm) completed a previously validated questionnaire developed to evaluate rapid weight loss in judo athletes, which provides a score; the higher the score obtained, the more aggressive the weight loss behaviors. Data were analyzed using descriptive statistics and frequency analyses. Mean questionnaire scores were used to compare specific groups of athletes using, when appropriate, the Mann-Whitney U-test or general linear model one-way ANOVA followed by the Tamhane post hoc test. Results: Eighty-six percent of athletes reported having already lost weight to compete. When heavyweights are excluded, this percentage rises to 89%. Most athletes reported reductions of up to 5% of body weight (mean +/- SD: 2.5 +/- 2.3%), and the most weight ever lost was usually 2%-5%, although a considerable part of the athletes reported reductions of 5%-10% (mean +/- SD: 6 +/- 4%). The number of reductions undergone in a season was 3 +/- 5, and the reductions usually occurred within 7 +/- 7 d. Athletes began cutting weight at 12.6 +/- 6.1 yr. No significant differences were found in the scores obtained by male versus female athletes or by athletes from different weight classes. Elite athletes scored significantly higher in the questionnaire than nonelite athletes, and athletes who began cutting weight earlier also scored higher than those who began later. Conclusions: Rapid weight loss is highly prevalent in judo competitors. 
The level of aggressiveness of weight management behaviors does not seem to be influenced by gender or weight class, but it does seem to be influenced by competitive level and by the age at which athletes began cutting weight.
Abstract:
The aim of the present study was to compare and correlate the training impulse (TRIMP) estimates proposed by Banister (TRIMP(Banister)), Stagno (TRIMP(Stagno)) and Manzi (TRIMP(Manzi)). The subjects were submitted to an incremental test on a cycle ergometer with heart rate and blood lactate concentration measurements. On a second occasion, they performed 30 min of exercise at the intensity corresponding to the maximal lactate steady state, and TRIMP(Banister), TRIMP(Stagno) and TRIMP(Manzi) were calculated. The mean values of TRIMP(Banister) (56.5 +/- 8.2 a.u.) and TRIMP(Stagno) (51.2 +/- 12.4 a.u.) were not different (P > 0.05) and were highly correlated (r = 0.90). In addition, they presented a good level of agreement, that is, low bias and relatively narrow limits of agreement. On the other hand, despite being highly correlated (r = 0.93), TRIMP(Stagno) and TRIMP(Manzi) (73.4 +/- 17.6 a.u.) were different (P < 0.05), with a low level of agreement. The TRIMP(Banister) and TRIMP(Manzi) estimates were not different (P = 0.06) and were highly correlated (r = 0.82), but showed a low level of agreement. Thus, we conclude that the investigated TRIMP methods are not equivalent. In practical terms, it seems prudent to monitor the training process using only one of the estimates.
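For reference, Banister's TRIMP weights exercise duration by the fractional heart-rate reserve and an exponential intensity factor. A minimal sketch: the heart-rate values are invented, and the 0.64/1.92 and 0.86/1.67 coefficients are the male/female constants commonly cited for this formula, assumed here rather than taken from the abstract:

```python
import math

def trimp_banister(duration_min, hr_ex, hr_rest, hr_max, male=True):
    """Banister's training impulse: duration x fractional heart-rate
    reserve x a sex-specific exponential weighting of intensity."""
    dhr = (hr_ex - hr_rest) / (hr_max - hr_rest)  # fractional HR reserve
    b, k = (0.64, 1.92) if male else (0.86, 1.67)
    return duration_min * dhr * b * math.exp(k * dhr)

# 30 min at a steady exercise heart rate (all HR values are assumed):
print(round(trimp_banister(30, 165, 60, 190), 1))
```

The Stagno and Manzi variants replace the exponential weighting with coefficients fitted to lactate data, which is why the three estimates diverge despite being highly correlated.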
Abstract:
PIBIC-CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico
Abstract:
The adaptive process in motor learning was examined in terms of the effects of varying amounts of constant practice performed before random practice. Participants pressed five response keys sequentially, the last one coincident with the lighting of a final visual stimulus provided by a complex coincident timing apparatus. Different visual stimulus speeds were used during the random practice. 33 children (M age = 11.6 yr.) were randomly assigned to one of three experimental groups: constant-random, constant-random 33%, and constant-random 66%. The constant-random group practiced constantly until they reached a criterion of performance stabilization: three consecutive trials within 50 msec. of error. The other two groups had additional constant practice of 33% and 66%, respectively, of the number of trials needed to achieve the stabilization criterion. All three groups then performed 36 trials under random practice; in the adaptation phase, they practiced at a visual stimulus speed different from that adopted in the stabilization phase. Global performance measures were absolute, constant, and variable errors, and movement pattern was analyzed by relative timing and overall movement time. There was no group difference in the global performance measures or overall movement time. However, differences between the groups were observed in movement pattern, since the constant-random 66% group changed its relative timing performance in the adaptation phase.
Abstract:
An experiment was conducted to investigate the persistence of the effect of "bandwidth knowledge of results (KR)" manipulated during the learning phase of a manual force-control task. The experiment consisted of two phases: an acquisition phase with the goal of maintaining 60% of maximum force over 30 trials, and a second phase with the objective of maintaining 40% of maximum force over 20 further trials. There were four bandwidths of KR, with feedback given when performance error exceeded 5, 10, or 15% of the target, plus a control group (0% bandwidth). Analysis showed that the 5, 10, and 15% bandwidths led to better performance than 0% bandwidth KR at the beginning of the second phase, and this advantage persisted during the extended trials.
Abstract:
Molybdenum and tungsten bimetallic oxides were synthesized by the following methods: Pechini, coprecipitation and solid state reaction (SSR). After characterization, those solids were carburized at programmed temperature. The carburization process was monitored by following the consumption of the carburizing hydrocarbon and the CO produced. This monitoring makes it possible to avoid or diminish the formation of pyrolytic carbon.
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) have been proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool to help understand the process that generated such structure. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, the inference and parameter estimation of such models are still computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes and using them to model LD. The results obtained on public data from the HapMap database showed that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D'. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causality information gained by our method can be useful for assessing the conservation of genetic markers and for guiding the selection of a subset of representative markers.
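The traditional LD measure D' mentioned above normalizes the raw disequilibrium coefficient by its maximum attainable value given the allele frequencies; a minimal sketch, with toy frequencies for illustration:

```python
def d_prime(p_ab, p_a, p_b):
    """Normalized linkage disequilibrium D' from the haplotype frequency
    p_ab and the allele frequencies p_a, p_b. D = p_ab - p_a * p_b is
    scaled by the largest |D| compatible with the allele frequencies,
    so |D'| <= 1, with |D'| = 1 indicating complete LD."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max else 0.0

# Complete LD: the AB haplotype is as frequent as allele A allows.
print(d_prime(p_ab=0.30, p_a=0.30, p_b=0.60))
```

This is the static pairwise view the abstract contrasts with generative models, which additionally describe the process that produced the observed haplotype structure.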
Abstract:
Understanding the product's 'end-of-life' is important to reduce the environmental impact of the products' final disposal. When the initial stages of product development consider end-of-life aspects, which can be established by ecodesign (a proactive approach of environmental management that aims to reduce the total environmental impact of products), it becomes easier to close the loop of materials. The 'end-of-life' ecodesign methods generally include more than one 'end-of-life' strategy. Since product complexity varies substantially, some components, systems or sub-systems are easier to recycle, reuse or remanufacture than others. Remanufacture is an effective way to maintain products in a closed loop, reducing both the environmental impacts and the costs of manufacturing processes. This paper presents some ecodesign methods focused on the integration of different 'end-of-life' strategies, with special attention to remanufacturing, given its increasing importance in the international scenario for reducing the life cycle impacts of products. (C) 2009 Elsevier Ltd. All rights reserved.