951 results for Finite Difference Model
Abstract:
Optimal challenge occurs when an individual perceives the challenge of the task to be equaled or matched by his or her own skill level (Csikszentmihalyi, 1990). The purpose of this study was to test the impact of the OPTIMAL model on physical education students' motivation and perceptions of optimal challenge across four games categories (i.e., target, batting/fielding, net/wall, invasion). Enjoyment, competence, student goal orientation and activity level were examined in relation to the OPTIMAL model. A total of 22 students (17 M; 5 F) and their parents provided informed consent to take part in the study, and the students were taught four OPTIMAL lessons and four non-OPTIMAL lessons spanning the four games categories by their own teacher. All students completed the Task and Ego Orientation in Sport Questionnaire (TEOSQ; Duda & Whitehead, 1998), the Intrinsic Motivation Inventory (IMI; McAuley, Duncan, & Tammen, 1987) and the Children's Perception of Optimal Challenge Instrument (CPOCI; Mandigo, 2001). Sixteen students (two per lesson) were observed using the System for Observing Fitness Instruction Time (SOFIT; McKenzie, 2002), and each also took part in a structured interview after the lesson was completed. Quantitative results showed no overall significant difference in motivational outcomes between OPTIMAL and non-OPTIMAL lessons. However, when the lessons were broken down into games categories, significant differences emerged. Levels of perceived competence were higher in non-OPTIMAL batting/fielding lessons than in OPTIMAL lessons, whereas levels of enjoyment and perceived competence were higher in OPTIMAL invasion lessons than in non-OPTIMAL invasion lessons. Qualitative results revealed significant feelings of skill/challenge balance, enjoyment and competence in the OPTIMAL lessons. Moreover, the percentage of active movement time in OPTIMAL lessons was nearly twice that of the non-OPTIMAL lessons.
Abstract:
This study explores how new university teachers develop a teaching identity. Despite the significance of teaching, which usually comprises 40% of a Canadian academic's workload, few new professors have any formal preparation for that aspect of their role. Discipline-specific education for postsecondary professors is a well-defined path; graduates applying for faculty positions will have the terminal degree to attest to their knowledge and skill in conducting research in the discipline. While teaching is usually given the same workload balance as research, it is not clear how professors create themselves as teaching professionals. Drawing on Kelly's (1955) personal construct theory and Kegan's (1982, 1994) model of developmental constructivism through differentiation and integration, this study used a phenomenographic framework (Marton, 1986, 1994; Trigwell & Prosser, 1996) to investigate the question of how new faculty members construe their identity as university teachers. Further, my own role development as researcher was used as an additional lens through which to view the study results. The study focused particularly on the challenges and supports to teaching role development and outlines recommendations the participants made for supporting other newcomers. In addition, the variations and similarities in the results suggest a developmental model of conceptions of teaching roles, one in which teaching, research, and service roles are viewed as more integrated over time. Developing a teacher identity was seen as a progression on a hierarchical model similar to Maslow's (1968) hierarchy of needs.
Abstract:
years 8 months) and 24 older (M = 7 years 4 months) children. A Monitoring Process Model (MPM) was developed and tested in order to ascertain at which component process of the MPM age differences would emerge. The MPM had four components: (1) assessment; (2) evaluation; (3) planning; and (4) behavioural control. The MPM was assessed directly using a referential communication task in which the children were asked to make a series of five Lego buildings (a baseline condition and one building for each MPM component). Children listened to instructions from one experimenter while a second experimenter in the room (a confederate) interjected varying levels of verbal feedback in order to assist the children and control the component of the MPM. This design allowed us to determine at which "stage" of processing children would most likely have difficulty monitoring themselves in this social-cognitive task. Developmental differences were observed for the evaluation, planning and behavioural control components, suggesting that older children were able to be more successful with the more explicit metacomponents. Interestingly, however, there was no age difference in terms of Lego task success in the baseline condition, suggesting that without the intervention of the confederate younger children monitored the task about as well as older children. This pattern of results indicates that the younger children were disrupted by the feedback rather than helped. On the other hand, the older children were able to incorporate the feedback offered by the confederate into a plan of action. Another aim of this study was to assess similar processing components to those investigated by the MPM Lego task in a more naturalistic observation. Together, the use of the Lego Task (a social-cognitive task) and the naturalistic social interaction allowed for the appraisal of cross-domain continuities and discontinuities in monitoring behaviours. In this vein, analyses were undertaken in order to ascertain whether or not successful performance in the MPM Lego Task would predict cross-domain competence in the more naturalistic social interchange. Indeed, success in the two latter components of the MPM (planning and behavioural control) was related to overall competence in the naturalistic task. However, this cross-domain prediction was not evident for all levels of the naturalistic interchange, suggesting that the nature of the feedback a child receives is an important determinant of response competency. Individual difference measures reflecting the children's general cognitive capacity (Working Memory and Digit Span) and verbal ability (vocabulary) were also taken in an effort to account for more variance in the prediction of task success. However, these individual difference measures did not serve to enhance the prediction of task performance in either the Lego Task or the naturalistic task. Similarly, parental responses to questionnaires pertaining to their child's temperament and social experience also failed to increase prediction of task performance. On-line measures of the children's engagement, positive affect and anxiety also failed to predict competence ratings.
Abstract:
Monte Carlo simulations were carried out using a nearest-neighbour ferromagnetic XY model on both 2-D and 3-D quasi-periodic lattices. In the case of 2-D, both the unfrustrated and frustrated XY model were studied. For the unfrustrated 2-D XY model, we have examined the magnetization, specific heat, linear susceptibility, helicity modulus and the derivative of the helicity modulus with respect to inverse temperature. The behaviour of all these quantities points to a Kosterlitz-Thouless transition occurring in the temperature range Tc = (1.0–1.05) J/kB and with critical exponents that are consistent with previous results (obtained for crystalline lattices). However, in the frustrated case, analysis of the spin glass susceptibility and Edwards-Anderson order parameter, in addition to the magnetization, specific heat and linear susceptibility, supports a spin glass transition. In the case where the 'thin' rhombus is fully frustrated, a freezing transition occurs at Tf = 0.137 J/kB, which contradicts previous work suggesting the critical dimension of spin glasses to be dc > 2. In the 3-D systems, examination of the magnetization, specific heat and linear susceptibility reveals a conventional second-order phase transition. Through a cumulant analysis and finite-size scaling, a critical temperature of Tc = (2.292 ± 0.003) J/kB and critical exponents of α = 0.03 ± 0.03, β = 0.30 ± 0.01 and γ = 1.31 ± 0.02 have been obtained.
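As a rough illustration of the kind of Metropolis Monte Carlo update underlying such simulations, the sketch below implements one sweep of a ferromagnetic nearest-neighbour XY model on a small periodic square lattice in Python. The lattice type, size, temperature and sweep count are illustrative placeholders (with kB = 1); it does not reproduce the quasi-periodic 2-D and 3-D systems studied in the thesis.

```python
import numpy as np

def xy_metropolis_sweep(theta, T, J=1.0, rng=None):
    """One Metropolis sweep of a ferromagnetic nearest-neighbour XY model on a
    periodic square lattice (illustrative stand-in for the quasi-periodic
    lattices used in the study); energies are in units of J, with kB = 1."""
    rng = rng or np.random.default_rng()
    L = theta.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        proposal = theta[i, j] + rng.uniform(-np.pi, np.pi)
        nbrs = [theta[(i + 1) % L, j], theta[(i - 1) % L, j],
                theta[i, (j + 1) % L], theta[i, (j - 1) % L]]
        dE = -J * sum(np.cos(proposal - n) - np.cos(theta[i, j] - n) for n in nbrs)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            theta[i, j] = proposal
    return theta

# Toy run near the quoted transition region (sizes and sweep counts are tiny).
rng = np.random.default_rng(0)
L, T = 16, 1.0
theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))
for _ in range(500):
    xy_metropolis_sweep(theta, T, rng=rng)
m = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"magnetization per spin at T = {T}: {m:.3f}")
```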
Abstract:
To evaluate the effectiveness of a goal-setting model on behavioural change, thirty-nine adults between the ages of 23 and 73 years who were in a weight loss program were assigned to one of two groups. One group was taught to change eating behaviour using a weight-reducing diet. The other group was taught to use a goal-setting model to change behaviour. Pretest and posttest surveys were completed by all participants, and a callback survey by the experimentals. The PET Type Check and Kolb's Learning Style Inventory were administered to all participants. As well, five of the experimentals were interviewed. Results of quantitative analyses showed no significant difference between the two groups, but qualitative research suggested that experimentals were more likely to use the goal-setting model to make behavioural changes, and that being successful increased their self-efficacy.
Abstract:
In the literature, persistent neural activity over frontal and parietal areas during the delay period of oculomotor delayed response (ODR) tasks has been interpreted as an active representation of task-relevant information and response preparation. Following a recent ERP study (Tekok-Kilic, Tays, & Tkach, 2011) that reported task-related slow wave differences over frontal and parietal sites during the delay periods of three ODR tasks, the present investigation explored developmental differences in young adults and adolescents during the same ODR tasks using 128-channel dense electrode array methodology and source localization. This exploratory study showed that neural functioning underlying visual-spatial working memory (WM) differed between age groups in the Match condition. More specifically, this difference was localized anteriorly during the late delay period. Given the protracted maturation of the frontal lobes, the observed variation at the frontal site may indicate that adolescents and young adults recruit frontal-parietal resources differently.
Abstract:
The primary goal was to test a mediated-moderation model in which dispositional optimism was the moderator and its role was mediated by problem-focused coping. A secondary goal was to demonstrate that posttraumatic growth could be differentiated from maturation and normal development. Two groups of participants were recruited and completed questionnaires twice, with a 60-day interval: one group (Trauma) described a traumatic experience, and the second group (Non-trauma) described a significant experience. Contrary to the hypothesis, only problem-focused coping and deliberate rumination predicted posttraumatic growth, and these findings were observed only in concurrent analyses. Furthermore, the results indicated that there was no significant difference between groups on growth scores at either Time 1 or Time 2. The findings suggest that the term “posttraumatic growth” may refer to the context in which growth occurs rather than to some developmental process that uniquely follows trauma.
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
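To make the two-sided-autoregression idea concrete, the sketch below (a loose numerical illustration only, not the paper's exact conditioning scheme or test statistics) regresses every second interior observation of a simulated AR(1) series on its two neighbours and applies ordinary least squares with classical standard errors; the sample size, AR coefficient and the particular split are arbitrary choices for the example.

```python
import numpy as np

# Loose illustration of the two-sided autoregression idea: condition every
# second interior observation of an AR(1) series on its neighbours y_{t-1}
# and y_{t+1}, then apply standard OLS inference to the resulting regression.
rng = np.random.default_rng(1)
n, rho = 200, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.standard_normal()   # simulated AR(1) data

idx = np.arange(1, n - 1, 2)                        # conditioned subsample
X = np.column_stack([np.ones(idx.size), y[idx - 1], y[idx + 1]])
b, *_ = np.linalg.lstsq(X, y[idx], rcond=None)      # OLS on the two-sided AR
resid = y[idx] - X @ b
s2 = resid @ resid / (idx.size - X.shape[1])
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
print("two-sided AR coefficients:", b[1:], "with standard errors:", se[1:])
```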
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model; hence reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of standard linear regressions, based on the technique of Monte Carlo tests.
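A minimal sketch of how such a Monte Carlo test can be set up follows, using the Jarque-Bera statistic purely as a stand-in for whichever residual-based normality statistic is of interest (the design matrix, sample size and replication count are illustrative). Because the OLS residuals equal M·u regardless of the regression coefficients, and the statistic is scale-invariant, its null distribution can be simulated without knowing any nuisance parameters.

```python
import numpy as np

def jarque_bera_stat(e):
    """Jarque-Bera normality statistic from sample skewness and kurtosis."""
    e = e - e.mean()
    s2 = e @ e / len(e)
    skew = (e ** 3).mean() / s2 ** 1.5
    kurt = (e ** 4).mean() / s2 ** 2
    return len(e) / 6.0 * (skew ** 2 + 0.25 * (kurt - 3.0) ** 2)

def mc_normality_pvalue(y, X, n_rep=99, seed=0):
    """Monte Carlo p-value for a residual-based normality test in y = X b + u.
    Under the null the OLS residuals equal M u (M the residual-maker matrix)
    and the statistic is scale-invariant, so its null distribution is simulable."""
    rng = np.random.default_rng(seed)
    M = np.eye(len(y)) - X @ np.linalg.inv(X.T @ X) @ X.T
    s_obs = jarque_bera_stat(M @ y)
    sims = np.array([jarque_bera_stat(M @ rng.standard_normal(len(y)))
                     for _ in range(n_rep)])
    return (1 + np.sum(sims >= s_obs)) / (n_rep + 1)

# Illustrative use on simulated regression data.
rng = np.random.default_rng(42)
X = np.column_stack([np.ones(50), rng.standard_normal((50, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.standard_normal(50)
print("Monte Carlo p-value:", mc_normality_pvalue(y, X))
```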
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin’s q and to a model of academic performance.
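The instrument-substitution idea can be sketched in a stylized single-regressor, single-instrument setting with made-up data (the paper's procedures cover far more general models, projections and sample splitting): to test beta = beta0, regress y - beta0·x directly on the instruments and F-test their joint significance, then invert the test over a grid of beta0 values to obtain a confidence set.

```python
import numpy as np
from scipy import stats

def ar_test_pvalue(y, x, Z, beta0):
    """Anderson-Rubin-type test of H0: beta = beta0 in y = x * beta + u.
    Regress y - x * beta0 on the instruments Z (first column is a constant)
    and F-test the joint significance of the non-constant instruments."""
    v = y - beta0 * x
    n, k = Z.shape
    Pz = Z @ np.linalg.inv(Z.T @ Z) @ Z.T
    rss_u = v @ v - v @ Pz @ v              # residual SS using all instruments
    rss_r = v @ v - n * v.mean() ** 2       # residual SS with a constant only
    q = k - 1
    F = ((rss_r - rss_u) / q) / (rss_u / (n - k))
    return 1.0 - stats.f.cdf(F, q, n - k)

# Invert the test over a grid of candidate values to build a confidence set.
rng = np.random.default_rng(3)
n = 200
z = rng.standard_normal(n)
x = 0.8 * z + rng.standard_normal(n)        # the instrument is informative here
y = 1.5 * x + rng.standard_normal(n)
Z = np.column_stack([np.ones(n), z])
kept = [b0 for b0 in np.linspace(0.5, 2.5, 201) if ar_test_pvalue(y, x, Z, b0) > 0.05]
if kept:
    print(f"95% AR-type confidence set approx. [{min(kept):.2f}, {max(kept):.2f}]")
else:
    print("confidence set empty at the 5% level")
```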
Abstract:
This paper considers various asymptotic approximations in the near-integrated first-order autoregressive model with a non-zero initial condition. We first extend the work of Knight and Satchell (1993), who considered the random walk case with a zero initial condition, to derive the expansion of the relevant joint moment generating function in this more general framework. We also consider, as alternative approximations, the stochastic expansion of Phillips (1987c) and the continuous time approximation of Perron (1991). We assess whether these alternative methods provide an adequate approximation to the finite-sample distribution of the least-squares estimator in a first-order autoregressive model. The results show that, when the initial condition is non-zero, Perron's (1991) continuous time approximation performs very well, while the others only offer improvements when the initial condition is zero.
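For context, the finite-sample distribution that these expansions aim to approximate is easy to tabulate by simulation. The short sketch below (parameter values are invented for illustration) records the least-squares estimator of the autoregressive coefficient in a near-integrated AR(1) with a non-zero initial condition; it illustrates the object being approximated, not the paper's analytical expansions.

```python
import numpy as np

def ols_rho(y):
    """Least-squares estimator of rho in y_t = rho * y_{t-1} + e_t (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def simulate_ar1(n, rho, y0, rng):
    """Simulate n steps of an AR(1) with standard normal errors, starting at y0."""
    y = np.empty(n + 1)
    y[0] = y0
    for t in range(1, n + 1):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

# Near-integrated case rho = 1 + c/n with a non-zero initial condition.
rng = np.random.default_rng(5)
n, c, y0 = 100, -5.0, 3.0
rho = 1.0 + c / n
est = np.array([ols_rho(simulate_ar1(n, rho, y0, rng)) for _ in range(5000)])
print(f"true rho = {rho:.3f}; mean of OLS estimates = {est.mean():.3f} "
      f"(finite-sample bias = {est.mean() - rho:+.3f})")
```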
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the Monte Carlo (MC) test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
Abstract:
The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable, possibly discrete, test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified, asymptotically justified versions of the MMC method are also proposed, and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
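A compact sketch of the maximized MC idea follows, using an invented toy example rather than one from the paper: when the statistic's null distribution depends on a nuisance parameter, compute an ordinary MC p-value at each point of a grid of nuisance values and reject only if the maximum of these p-values falls below the nominal level. The helper names (mc_pvalue, mmc_pvalue, sim_ar1, t_stat) and the grid are illustrative choices.

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep, rng):
    """Ordinary Monte Carlo p-value: simulate the statistic under the null."""
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

def mmc_pvalue(stat_obs, simulate_stat_given, nuisance_grid, n_rep=99, seed=0):
    """Maximized MC p-value: when the null distribution depends on a nuisance
    parameter, maximize the MC p-value over a grid of its values; rejecting
    when this maximum falls below alpha controls the significance level."""
    rng = np.random.default_rng(seed)
    return max(mc_pvalue(stat_obs, lambda r: simulate_stat_given(r, nu), n_rep, rng)
               for nu in nuisance_grid)

# Invented toy example: test H0 "mean = 0" for an AR(1) series whose
# autocorrelation rho is an unspecified nuisance parameter.
def sim_ar1(rng, rho, n=100):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

def t_stat(y):
    return abs(y.mean()) / (y.std(ddof=1) / np.sqrt(len(y)))

rng = np.random.default_rng(7)
y_obs = sim_ar1(rng, rho=0.5)                      # data generated under the null
p_max = mmc_pvalue(t_stat(y_obs),
                   lambda r, rho: t_stat(sim_ar1(r, rho)),
                   nuisance_grid=np.linspace(0.0, 0.9, 10))
print("maximized MC p-value:", p_max)
```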
Abstract:
Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to control completely the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
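As a stripped-down counterpart to the causality-testing application, the sketch below computes a Granger-causality F statistic in one equation of a simulated bivariate VAR(1) and obtains its p-value by re-simulating the statistic under the estimated restricted model. This is a parametric-bootstrap-style simplification with invented sample size, lag length and data-generating process, not the provably exact maximized Monte Carlo procedure developed in the paper.

```python
import numpy as np

def var1_simulate(c, A, n, rng):
    """Simulate a bivariate VAR(1): y_t = c + A y_{t-1} + e_t, e_t ~ N(0, I)
    (the error covariance is taken as the identity purely for simplicity)."""
    y = np.zeros((n, 2))
    for t in range(1, n):
        y[t] = c + A @ y[t - 1] + rng.standard_normal(2)
    return y

def granger_F(y):
    """F statistic for H0: 'variable 2 does not Granger-cause variable 1'
    in a VAR(1), computed by OLS on equation 1 (a single restriction)."""
    Y, X = y[1:, 0], y[:-1, :]
    Xu = np.column_stack([np.ones(len(Y)), X])          # own lag and other lag
    Xr = np.column_stack([np.ones(len(Y)), X[:, 0]])    # own lag only
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    return (rss(Xr) - rss(Xu)) / (rss(Xu) / (len(Y) - Xu.shape[1]))

def fit_restricted_var1(y):
    """OLS estimates of the VAR(1) in which variable 2 is excluded from equation 1."""
    Y, X = y[1:], y[:-1]
    b1 = np.linalg.lstsq(np.column_stack([np.ones(len(Y)), X[:, 0]]), Y[:, 0], rcond=None)[0]
    b2 = np.linalg.lstsq(np.column_stack([np.ones(len(Y)), X]), Y[:, 1], rcond=None)[0]
    return np.array([b1[0], b2[0]]), np.array([[b1[1], 0.0], [b2[1], b2[2]]])

# Simulated data and a simulation-based p-value under the estimated null model.
rng = np.random.default_rng(9)
y_obs = var1_simulate(np.zeros(2), np.array([[0.5, 0.0], [0.3, 0.4]]), 150, rng)
F_obs = granger_F(y_obs)
c0, A0 = fit_restricted_var1(y_obs)
sims = np.array([granger_F(var1_simulate(c0, A0, 150, rng)) for _ in range(199)])
print("F =", round(F_obs, 3), " simulated p-value =", (1 + np.sum(sims >= F_obs)) / 200)
```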
Abstract:
Affiliation: Institut de recherche en immunologie et en cancérologie, Université de Montréal