57 results for learning with errors
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
The adaptive process in motor learning was examined in terms of the effects of varying amounts of constant practice performed before random practice. Participants pressed five response keys sequentially, the last one coincident with the lighting of a final visual stimulus provided by a complex coincident timing apparatus. Different visual stimulus speeds were used during the random practice. Thirty-three children (M age = 11.6 yr.) were randomly assigned to one of three experimental groups: constant-random, constant-random 33%, and constant-random 66%. The constant-random group practiced constantly until reaching a performance stabilization criterion of three consecutive trials within 50 msec of error. The other two groups had additional constant practice of 33% and 66%, respectively, of the number of trials needed to achieve the stabilization criterion. All three groups performed 36 trials under random practice; in the adaptation phase, they practiced at a visual stimulus speed different from the one adopted in the stabilization phase. Global performance measures were absolute, constant, and variable errors, and movement pattern was analyzed by relative timing and overall movement time. There was no group difference in the global performance measures or overall movement time. However, differences between the groups were observed in movement pattern, since the constant-random 66% group changed its relative timing in the adaptation phase.
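For reference, the three global error measures named in this abstract are commonly defined as below; the notation (target time T, response time t_i) is assumed for illustration and is not taken from the study.

\[
  \mathrm{CE} = \frac{1}{n}\sum_{i=1}^{n}(t_i - T), \qquad
  \mathrm{AE} = \frac{1}{n}\sum_{i=1}^{n}\lvert t_i - T\rvert, \qquad
  \mathrm{VE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(t_i - \bar{t}\,\bigr)^{2}},
\]

where t_i is the response time on trial i, T is the target (coincidence) time, and t-bar is the mean response time across trials.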
Abstract:
Two case studies are presented to describe the process of public school teachers authoring and creating chemistry simulations. They are part of the Virtual Didactic Laboratory for Chemistry, a project developed by the School of the Future of the University of Sao Paulo. The documental analysis of the material produced by the two groups of teachers reflects different selection processes for both themes and problem-situations when creating simulations. The study demonstrates the potential for chemistry learning with an approach that takes students' everyday lives into account and is based on collaborative work among teachers and researchers. Also, from the teachers' perspectives, the possibilities of interaction that a simulation offers for classroom activities are considered.
Abstract:
A Learning Object (OA) is any digital resource that can be reused to support learning with specific functions and objectives. OA specifications are commonly offered in the SCORM model without considering activities in groups. This deficiency was overcome by the solution presented in this paper: the work specifies OAs for e-learning activities in groups based on the SCORM model. This solution allows the creation of dynamic objects that include content and software resources for collaborative learning processes. This results in a generalization of the OA definition and in a contribution to e-learning specifications.
Abstract:
Background: Food portion size estimation involves a complex mental process that may influence food consumption evaluation. Knowing the variables that influence this process can improve the accuracy of dietary assessment. The present study aimed to evaluate the ability of nutrition students to estimate food portions in usual meals and to relate food energy content with errors in food portion size estimation. Methods: Seventy-eight nutrition students, who had already studied food energy content, participated in this cross-sectional study on the estimation of food portions, organised into four meals. The participants estimated the quantity of each food, in grams or millilitres, with the food in view. Estimation errors were quantified and their magnitudes were evaluated. Estimated quantities (EQ) lower than 90% and higher than 110% of the weighed quantity (WQ) were considered to represent underestimation and overestimation, respectively. The correlation between food energy content and estimation error was analysed by the Spearman correlation, and the comparison between the mean EQ and WQ was carried out by means of the Wilcoxon signed rank test (P < 0.05). Results: A low percentage of estimates (18.5%) were considered accurate (+/- 10% of the actual weight). The most frequently underestimated food items were cauliflower, lettuce, apple and papaya; the most often overestimated items were milk, margarine and sugar. A significant positive correlation between food energy density and estimation was found (r = 0.8166; P = 0.0002). Conclusions: The results obtained in the present study revealed a low percentage of acceptable estimations of food portion size by nutrition students, with trends toward overestimation of high-energy food items and underestimation of low-energy items.
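The 90%/110% classification rule and the statistics described above can be illustrated with a minimal Python sketch; the data values and variable names are hypothetical and are not taken from the study.

import numpy as np
from scipy.stats import spearmanr, wilcoxon

def classify_estimate(estimated_g, weighed_g):
    """Label an estimate as under-, over-, or acceptably estimated (90%/110% rule)."""
    ratio = estimated_g / weighed_g
    if ratio < 0.90:
        return "underestimated"
    if ratio > 1.10:
        return "overestimated"
    return "acceptable"

# Hypothetical per-food data: estimated quantity (EQ), weighed quantity (WQ),
# and energy density (kcal/g).
eq = np.array([120.0, 35.0, 80.0, 15.0])
wq = np.array([100.0, 60.0, 85.0, 10.0])
energy_density = np.array([0.6, 0.2, 0.5, 3.9])

labels = [classify_estimate(e, w) for e, w in zip(eq, wq)]
rho, p_rho = spearmanr(energy_density, eq / wq)   # association with over/underestimation
w_stat, p_w = wilcoxon(eq, wq)                    # compares estimated vs weighed quantities
print(labels, rho, p_rho, w_stat, p_w)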
Abstract:
Teaching and learning with the history and philosophy of science (HPS) has been, and continues to be, supported by science educators. While science education standards documents in many countries also stress the importance of teaching and learning with HPS, the approach still suffers from ineffective implementation in school science teaching. In order to better understand this problem, an analysis of the obstacles to implementing HPS in classrooms was undertaken. The obstacles taken into account were structured in four groups: 1. the culture of teaching physics, 2. teachers' skills and their epistemological and didactical attitudes and beliefs, 3. the institutional framework of science teaching, and 4. textbooks as fundamental didactical support. Implications for more effective implementation of HPS are presented, taking the social nature of educational systems into account.
Abstract:
The class of symmetric linear regression models has the normal linear regression model as a special case and includes several models that assume that the errors follow a symmetric distribution with longer-than-normal tails. An important member of this class is the t linear regression model, which is commonly used as an alternative to the usual normal regression model when the data contain extreme or outlying observations. In this article, we develop second-order asymptotic theory for score tests in this class of models. We obtain Bartlett-corrected score statistics for testing hypotheses on the regression and dispersion parameters. The corrected statistics have chi-squared distributions with errors of order O(n^{-3/2}), n being the sample size. The corrections represent an improvement over the corresponding original Rao's score statistics, which are chi-squared distributed up to errors of order O(n^{-1}). Simulation results show that the corrected score tests perform much better than their uncorrected counterparts in samples of small or moderate size.
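As a reference point, a Bartlett-type correction of a score statistic typically takes the form below; the exact coefficients used in the article are not reproduced here, so this is a sketch of the general construction rather than the paper's result.

\[
  S_R^{*} = S_R\Bigl\{1 - \bigl(c + b\,S_R + a\,S_R^{2}\bigr)\Bigr\},
\]

where S_R is the original score statistic and a, b, c are O(n^{-1}) quantities built from cumulants of log-likelihood derivatives, chosen so that S_R^{*} follows its reference chi-squared distribution up to an error of order O(n^{-3/2}) rather than O(n^{-1}).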
Abstract:
Composite electrodes were prepared using graphite powder and silicone rubber in different compositions. The use of such hydrophobic materials was intended to diminish the swelling observed in other cases when the electrodes are used in aqueous solutions for a long time. The composite was characterized with respect to response reproducibility, ohmic resistance, thermal behavior and active area. The voltammetric response toward analytes with known voltammetric behavior was also evaluated, always in comparison with glassy carbon. The 70% (graphite, w/w) composite electrode was used in the quantitative determination of hydroquinone (HQ) in a DPV procedure, in which a detection limit of 5.1 x 10^-8 mol L^-1 was observed. HQ was determined in a photographic developer sample with errors lower than 1% in relation to the label value.
Abstract:
The Neonatal Screening for Inborn Errors of Metabolism of the Association of Parents and Friends of Special Needs Individuals (APAE) - Bauru, Brazil, was implemented and accredited by the Brazilian Ministry of Health in 1998. It covers about 286 cities of the Bauru region and 420 collection spots. Its activities include screening, diagnosis, treatment and assistance for congenital hypothyroidism (CH) and phenylketonuria (PKU), among other conditions. In 2005, a partnership was established with the Department of Speech-Language Pathology and Audiology, Bauru School of Dentistry, University of São Paulo, Bauru, seeking to characterize and follow, by means of research studies, the development of the communicative abilities of children with CH and PKU. OBJECTIVE: The aim of this study was to describe the communicative and psycholinguistic abilities of children with CH and PKU. MATERIALS AND METHODS: Sixty-eight children (25 children aged 1 to 120 months with PKU and 43 children aged 1 to 60 months with CH) participated in the study. The clinical records were analyzed and different instruments were applied (Observation of Communication Behavior, Early Language Milestone Scale, Peabody Picture Vocabulary Test, Gesell & Amatruda's Behavioral Development Scale, Portage Operation Inventory, Language Development Evaluation Scale, Denver Developmental Screening Test, ABFW Child Language Test-phonology and Illinois Test of Psycholinguistic Abilities), according to the children's age group and developmental level. RESULTS: The children with PKU and CH were at risk for alterations in their developmental abilities (motor, cognitive, linguistic, adaptive and personal-social), mainly in the first years of life. Alterations in psycholinguistic abilities were also found, mainly after preschool age. Attention deficits and language and cognitive alterations were more often observed in children with CH, while attention deficits with hyperactivity and alterations in the personal-social, language and adaptive-motor abilities were more frequent in children with PKU. CONCLUSION: CH and PKU can cause communicative and psycholinguistic alterations that compromise communication and affect the social integration and learning of these individuals, showing the need for these abilities to be followed up by a speech-language pathologist.
Abstract:
The procedure for online process control by attributes consists of inspecting a single item at every m produced items. On the basis of the inspection result, it is decided whether the process is in control (the conforming fraction is stable) or out of control (the conforming fraction has decreased, for example). Most articles about online process control consider stopping the production process for an adjustment when the inspected item is non-conforming (production is then restarted in control, which is here denominated a corrective adjustment). Moreover, the articles related to this subject do not present semi-economical designs (which may yield high quantities of non-conforming items), as they do not include a policy of preventive adjustments (in which case no item is inspected), which can be more economical, mainly if the inspected item can be misclassified. In this article, the possibility of a preventive or corrective adjustment in the process is decided at every m produced items. If a preventive adjustment is decided upon, then no item is inspected. Otherwise, the m-th item is inspected; if it conforms, production goes on, otherwise an adjustment takes place and the process restarts in control. This approach is economically feasible for some practical situations, and the parameters of the proposed procedure are determined by minimizing an average cost function subject to statistical restrictions (for example, to assure a minimal level, fixed in advance, of conforming items in the production process). Numerical examples illustrate the proposal.
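For intuition about the basic inspect-every-m-items cycle with corrective adjustment, a minimal simulation sketch in Python follows; all parameter names and values are hypothetical, and the sketch omits the preventive adjustments, inspection errors and cost function that the paper actually optimizes.

import random

def simulate(p_shift, p_conform_in, p_conform_out, m, n_cycles, seed=0):
    """Fraction of conforming items when every m-th item is inspected and a
    non-conforming inspection triggers a corrective adjustment."""
    rng = random.Random(seed)
    in_control = True
    produced = conforming = 0
    for _ in range(n_cycles):
        last_item_conforms = True
        for _ in range(m):
            if in_control and rng.random() < p_shift:
                in_control = False            # process shifts out of control
            p = p_conform_in if in_control else p_conform_out
            last_item_conforms = rng.random() < p
            produced += 1
            conforming += last_item_conforms
        if not last_item_conforms:            # corrective adjustment: restart in control
            in_control = True
    return conforming / produced

print(simulate(p_shift=0.02, p_conform_in=0.99, p_conform_out=0.90, m=10, n_cycles=10000))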
Abstract:
Higher order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations focus on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C_1 and C_2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors in order to obtain more accurate numerical solutions of Maxwell's equations. For this purpose, we present a method to individually optimize the pair of coefficients, C_1 and C_2, based on any desired grid size resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid size resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
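For context, the four-point central-difference operator referred to above has the general form below; the Taylor-derived values of C_1 and C_2 are shown only as the standard reference point that the paper's optimized coefficients would replace.

\[
  \left.\frac{\partial F}{\partial x}\right|_{i}
  \approx \frac{1}{\Delta x}\Bigl[
    C_1\bigl(F_{i+\frac{1}{2}} - F_{i-\frac{1}{2}}\bigr)
    + C_2\bigl(F_{i+\frac{3}{2}} - F_{i-\frac{3}{2}}\bigr)\Bigr],
  \qquad C_1 = \tfrac{9}{8}, \quad C_2 = -\tfrac{1}{24},
\]

where F is a field component sampled on a staggered grid with spacing \Delta x; with the Taylor values the operator is fourth-order accurate in space, and the (2,4) scheme pairs it with second-order accuracy in time.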
Abstract:
There is no specific test to diagnose Alzheimer's disease (AD). Its diagnosis should be based upon clinical history, neuropsychological and laboratory tests, neuroimaging and electroencephalography (EEG). Therefore, new approaches are necessary to enable earlier and more accurate diagnosis and to follow treatment results. In this study we used a Machine Learning (ML) technique, named Support Vector Machine (SVM), to search for patterns in EEG epochs that differentiate AD patients from controls. As a result, we developed a quantitative EEG (qEEG) processing method for the automatic differentiation of patients with AD from normal individuals, as a complement to the diagnosis of probable dementia. We studied EEGs from 19 normal subjects (14 females/5 males, mean age 71.6 years) and 16 patients with probable mild to moderate AD (14 females/2 males, mean age 73.4 years). The results obtained from the analysis of EEG epochs were 79.9% accuracy and 83.2% sensitivity. The analysis considering the diagnosis of each individual patient reached 87.0% accuracy and 91.7% sensitivity.
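A minimal sketch of this kind of epoch-level SVM classification is shown below using scikit-learn; the feature extraction, array shapes and parameters are hypothetical placeholders rather than the authors' pipeline.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_epochs, n_features = 200, 24                 # e.g. spectral band powers per channel
X = rng.normal(size=(n_epochs, n_features))    # placeholder epoch features
y = rng.integers(0, 2, size=n_epochs)          # 1 = AD patient epoch, 0 = control epoch

# Standardize features, then fit an RBF-kernel SVM; evaluate with cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("epoch-level CV accuracy:", scores.mean())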
Abstract:
This four-experiment series sought to evaluate the potential of children with neurosensory deafness and cochlear implants to exhibit auditory-visual and visual-visual stimulus equivalence relations within a matching-to-sample format. Twelve children who became deaf prior to acquiring language (prelingual) and four who became deaf afterwards (postlingual) were studied. All children learned auditory-visual conditional discriminations and nearly all showed emergent equivalence relations. Naming tests, conducted with a subset of the children, showed no consistent relationship to the equivalence-test outcomes. This study makes several contributions to the literature on stimulus equivalence. First, it demonstrates that both pre- and postlingually deaf children can acquire auditory-visual equivalence relations after cochlear implantation, thus demonstrating symbolic functioning. Second, it directs attention to a population that may be especially interesting for researchers seeking to analyze the relationship between speaker and listener repertoires. Third, it demonstrates the feasibility of conducting experimental studies of stimulus control processes within the limitations of a hospital, which these children must visit routinely for the maintenance of their cochlear implants.
Abstract:
In this paper we discuss inference aspects of skew-normal nonlinear regression models following both a classical and a Bayesian approach, extending the usual normal nonlinear regression models. The univariate skew-normal distribution used in this work was introduced by Sahu et al. (Can J Stat 29:129-150, 2003); it is attractive because estimation of the skewness parameter does not present the same degree of difficulty as in the case of the Azzalini (Scand J Stat 12:171-178, 1985) distribution and, moreover, it allows easy implementation of the EM algorithm. As an illustration of the proposed methodology, we consider a data set previously analyzed in the literature under normality.
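For orientation, the Sahu-type skew-normal error is often written via the stochastic representation below; the exact parametrization adopted in the paper may differ, so this is only an assumed sketch.

\[
  \varepsilon_i = \lambda\,\lvert Z_{0i}\rvert + \sigma Z_{1i},
  \qquad Z_{0i},\, Z_{1i} \overset{iid}{\sim} N(0,1),
\]

where \lambda controls the skewness (\lambda = 0 recovers the usual normal error) and the latent half-normal term |Z_{0i}| can be treated as missing data, which is what makes the EM algorithm straightforward to implement.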
Abstract:
This paper deals with asymptotic results for a multivariate ultrastructural errors-in-variables regression model with equation errors. Sufficient conditions for attaining consistent estimators of the model parameters are presented. Asymptotic distributions for the line regression estimators are derived. Applications to the elliptical class of distributions under two error assumptions are presented. The model generalizes previous results aimed at univariate scenarios.
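As background, a univariate version of an ultrastructural errors-in-variables model with equation error can be sketched as follows; the paper studies a multivariate generalization, and this notation is assumed for illustration only.

\[
  y_i = \alpha + \beta x_i + q_i, \qquad
  Y_i = y_i + e_i, \qquad
  X_i = x_i + u_i,
\]

where only (X_i, Y_i) are observed, q_i is the equation error, e_i and u_i are measurement errors, and the latent x_i are independent with possibly different means \mu_i and a common variance (all \mu_i equal gives the structural model; zero variance gives the functional model).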
Abstract:
We consider a Bayesian approach for the nonlinear regression model in which the normal distribution on the error term is replaced by skewed distributions that account for both skewness and heavy tails, or for skewness alone. The type of data considered in this paper concerns repeated measurements taken over time on a set of individuals. Such multiple observations on the same individual generally produce serially correlated outcomes; thus, additionally, our model allows for correlation between observations made on the same individual. We illustrate the procedure using a data set on the growth curves of a clinical measurement for a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using appropriate posterior simulation schemes based on Markov chain Monte Carlo methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all these criteria chose the skew-t model as the best model for the errors. The DIC and CPO criteria are also validated, for the model proposed here, through a simulation study. As a conclusion of this study, the DIC criterion is not trustworthy for this kind of complex model.
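For reference, the two standard criteria named above are usually defined as follows; the proper scoring rules proposed in the paper are not reproduced here.

\[
  \mathrm{DIC} = \overline{D(\theta)} + p_D, \qquad
  p_D = \overline{D(\theta)} - D(\bar{\theta}), \qquad
  D(\theta) = -2\log f(y \mid \theta),
\]
\[
  \mathrm{CPO}_i = f\bigl(y_i \mid y_{(-i)}\bigr)
  = \int f(y_i \mid \theta)\,\pi\bigl(\theta \mid y_{(-i)}\bigr)\,d\theta,
\]

where \overline{D(\theta)} is the posterior mean deviance and y_{(-i)} denotes the data with the i-th observation removed; smaller DIC and larger \sum_i \log \mathrm{CPO}_i indicate better-fitting models.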