845 results for declarative, procedural, and reflective (DPR) model
Abstract:
The use of the multiple indicators, multiple causes model to operationalize formative variables (the formative MIMIC model) is advocated in the methodological literature. Yet, contrary to popular belief, the formative MIMIC model does not provide a valid method of integrating formative variables into empirical studies, and we recommend discarding it. Our arguments rest on the following observations. First, much formative variable literature appears to conceptualize a causal structure between the formative variable and its indicators which can be tested or estimated. We demonstrate that this assumption is illogical, that a formative variable is simply a researcher-defined composite of sub-dimensions, and that such tests and estimates are unnecessary. Second, despite this, researchers often use the formative MIMIC model as a means to include formative variables in their models and to estimate the magnitude of linkages between formative variables and their indicators. However, the formative MIMIC model cannot provide this information, since it is simply a model in which a common factor is predicted by some exogenous variables—the model does not integrate within it a formative variable. Empirical results from such studies need reassessing, since their interpretation may lead to inaccurate theoretical insights and the development of untested recommendations to managers. Finally, the use of the formative MIMIC model can foster fuzzy conceptualizations of variables, particularly since it can erroneously encourage the view that a single focal variable is measured with formative and reflective indicators. We explain these interlinked arguments in more detail and provide a set of recommendations for researchers to consider when dealing with formative variables.
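The abstract's central claim—that a formative variable is simply a researcher-defined composite of sub-dimensions, with nothing to estimate—can be illustrated with a minimal sketch. The construct name, indicator data, and weights below are all hypothetical choices, not anything from the paper:

```python
import numpy as np

# Hypothetical example: a formative "job satisfaction" composite defined by the
# researcher as a weighted sum of three sub-dimension scores. The weights are
# researcher-defined choices, not parameters to be tested or estimated.
rng = np.random.default_rng(0)
indicators = rng.normal(size=(100, 3))   # 100 respondents, 3 sub-dimension scores
weights = np.array([0.5, 0.3, 0.2])      # researcher-defined, sum to 1
composite = indicators @ weights         # the formative variable itself

print(composite.shape)                   # one composite score per respondent
```

Nothing about this composite requires a measurement model: it is fully determined once the weights are chosen, which is the sense in which the abstract argues causal tests between the composite and its indicators are unnecessary.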
The development, application, and implications of a strategy for reflective learning from experience
Abstract:
The problem on which this study focused was individuals' reduced capacity to respond to change and to engage in innovative learning when their reflective learning skills are limited. In this study, the preceding problem was addressed by two primary questions: To what degree can mastery of a strategy for reflective learning be facilitated as a part of an academic curriculum for professional practitioners? What impact will mastery of this strategy have on the learning style and adaptive flexibility of adult learners? The focus of the study was a direct application of human resource development technology in the professional preparation of teachers. The background of the problem in light of changing global paradigms and educational action orientations was outlined and a review of the literature was provided. Roots of thought for two key concepts (i.e., learning to learn from experience and meaningful reflection in learning) were traced. Reflective perspectives from the work of eight researchers were compared. A meta-model of learning from experience drawn from the literature served as a conceptual framework for the study. A strategy for reflective learning developed from this meta-model was taught to 109 teachers-in-training at Florida International University in Miami, Florida. Kolb's Adaptive Style Inventory and Learning Style Inventory were administered to the treatment group and to two control groups taught by the same professor. Three research questions and fourteen hypotheses guided data analysis. Qualitative review of 1565 personal documents generated by the treatment group indicated that 77 students demonstrated "double-loop" learning, going beyond previously established limits to perception, understanding, or action. The mean score for depth of reflection indicated "single-loop" learning with "reflection-in-action" present. The change in the mean score for depth of reflection from the beginning to end of the study was statistically significant (p < .05).
On quantitative measures of adaptive flexibility and learning style, with two exceptions, there were no significant differences noted between treatment and control groups on pre-test to post-test differences and on post-test mean scores adjusted for pre-test responses and demographic variables. Conclusions were drawn regarding treatment, instrumentation, and application of the strategy and the meta-model. Implications of the strategy and the meta-model for research, for education, for human resource development, for professional practice, and for personal growth were suggested. Qualitative training materials and Kolb's instruments were provided in the appendices.
Abstract:
A generalized version of the nonequilibrium linear Glauber model with q states in d dimensions is introduced and analyzed. The model is fully symmetric, its dynamics being invariant under all permutations of the q states. Exact expressions for the two-time autocorrelation and response functions on a d-dimensional lattice are obtained. In the stationary regime, the fluctuation-dissipation theorem holds, while in the transient regime aging is observed, with the fluctuation-dissipation ratio converging to the value predicted for the linear Glauber model.
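The two-time autocorrelation central to this abstract can be illustrated with a crude Monte Carlo sketch. The dynamics below (each chosen site copies a random nearest neighbour, which is invariant under permutations of the q states) is a simple permutation-symmetric stand-in, not the paper's exactly solved model, and the lattice size and times are arbitrary:

```python
import numpy as np

# Illustrative sketch: a 1D chain of q-state variables with permutation-symmetric
# copy dynamics. We estimate the two-time autocorrelation
#   C(t, tw) = <delta(s_i(t), s_i(tw))> - 1/q,
# which vanishes for uncorrelated configurations and equals 1 - 1/q at equal times.
rng = np.random.default_rng(1)
L, q = 200, 3
s = rng.integers(q, size=L)

def sweep(s):
    for _ in range(len(s)):
        i = rng.integers(len(s))
        j = (i + rng.choice([-1, 1])) % len(s)   # random nearest neighbour
        s[i] = s[j]                              # copy its state (symmetric in the q states)

tw = 10
for _ in range(tw):                              # evolve to the waiting time tw
    sweep(s)
ref = s.copy()
for _ in range(20):                              # evolve further to time t = tw + 20
    sweep(s)
C = np.mean(s == ref) - 1.0 / q
print(round(C, 3))
```

Measuring C for several waiting times tw at fixed time difference is the standard way to expose the aging regime the abstract refers to.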
Abstract:
The conventional convection-dispersion model is widely used to interrelate hepatic availability (F) and clearance (Cl) with the morphology and physiology of the liver and to predict effects such as changes in liver blood flow on F and Cl. The extension of this model to include nonlinear kinetics and zonal heterogeneity of the liver is not straightforward and requires numerical solution of partial differential equations, which is not available in standard nonlinear regression analysis software. In this paper, we describe an alternative compartmental model representation of hepatic disposition (including elimination). The model allows the use of standard software for data analysis and accurately describes the outflow concentration-time profile for a vascular marker after bolus injection into the liver. In an evaluation of a number of different compartmental models, the most accurate model required eight vascular compartments, two of them with back mixing. In addition, the model includes two adjacent secondary vascular compartments to describe the tail section of the concentration-time profile for a reference marker. The model has the added flexibility of being easy to modify to model various enzyme distributions and nonlinear elimination. Model predictions of F, MTT, CV2, and concentration-time profile as well as parameter estimates for experimental data of an eliminated solute (palmitate) are comparable to those for the extended convection-dispersion model.
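The core idea of such a compartmental representation—replacing the convection-dispersion PDE with a chain of well-mixed compartments solvable by ordinary ODE integration—can be sketched as follows. This is a minimal illustration with a plain eight-compartment chain and invented rate constants; the paper's actual model additionally includes back mixing and secondary tail compartments:

```python
import numpy as np

# Minimal sketch (hypothetical parameters): a unit bolus enters a chain of
# well-mixed vascular compartments in series, and the flux leaving the last
# compartment approximates the outflow concentration-time profile of a marker.
n = 8                      # number of vascular compartments (as in the abstract)
k = 2.0                    # first-order transfer rate between compartments (1/min)
dt, t_end = 0.001, 10.0
steps = int(t_end / dt)

c = np.zeros(n)
c[0] = 1.0                 # unit bolus placed in the first compartment
outflow = np.empty(steps)
for step in range(steps):  # explicit Euler integration of the linear ODE chain
    flux = k * c                         # transfer out of each compartment
    c[0] -= flux[0] * dt
    c[1:] += (flux[:-1] - flux[1:]) * dt # inflow from upstream minus outflow
    outflow[step] = flux[-1]             # rate leaving the final compartment

mass_out = outflow.sum() * dt            # recovered dose; should approach 1.0
print(round(mass_out, 3))
```

Because the chain is a set of coupled linear ODEs, it can be fitted with standard nonlinear regression software, which is exactly the practical advantage the abstract claims over the PDE formulation.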
Abstract:
Aluminium (Al) tolerance in plants may be conferred by reduced binding of Al in the cell wall through low root cation exchange capacity (CEC) or by organic acid exudation. Root CEC is related to the degree of esterification (DE) of pectin in the cell wall, and pectin hydrolysis plays a role in cell expansion. Therefore, it was hypothesised that Al-tolerant plants with a low root CEC maintain pectin hydrolysis in the presence of Al, allowing cell expansion to continue. Irrespective of the DE, binding of Al to pectin reduced the enzymatic hydrolysis of Al-pectin gels by polygalacturonase (E.C. 3.2.1.15). Pectin gels with calcium (Ca) were slightly hydrolysed by polygalacturonase. It was concluded, therefore, that Al tolerance conferred by low root CEC is not mediated by the ability to maintain pectin hydrolysis. Citrate and malate, but not acetate, effectively dissolved Al-pectate gel and led to hydrolysis of the dissolved pectin by polygalacturonase. The organic acids did not dissolve Ca-pectate, nor did they increase pectin hydrolysis by polygalacturonase. It was concluded that exudation of some organic acids can remove Al bound to pectin and this could alleviate toxicity, constituting a tolerance mechanism. (C) 2003 Editions scientifiques et medicales Elsevier SAS. All rights reserved.
Abstract:
The historic center of the Portuguese city of Guimarães has been a UNESCO World Heritage Site since 2001, and the city hosted the European Capital of Culture (ECOC) in 2012. Accordingly, Guimarães has made a major effort to promote tourism, positioning itself as an urban and cultural tourism destination. The present paper has two objectives. The first is to examine whether an existing push and pull motivation model finds statistical support with regard to the population of the municipality of Guimarães, a cultural tourism destination. The second is to study the role that important socio-demographic variables, such as gender, age, and education, play in determining the travel motivations of residents of this municipality. Insight into tourism motivation may be an important policy tool for tourism planners and managers in the development of products and marketing strategies. The empirical analysis is based on questionnaires administered in 2012 to residents of Guimarães. The present study shows that gender, age and education make a difference with regard to travel motivations.
Abstract:
The development of model observers for mimicking human detection strategies has progressed from symmetric signals in simple noise to increasingly complex backgrounds. In this study we implement different model observers for the complex task of detecting a signal in a 3D image stack. The backgrounds come from real breast tomosynthesis acquisitions and the signals were simulated and reconstructed within the volume. Two different tasks relevant to the early detection of breast cancer were considered: detecting an 8 mm mass and detecting a cluster of microcalcifications. The model observers were calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian channels, and a modified (partial prewhitening [PPW]) observer adapted to realistic signals that are not circularly symmetric. The sustained temporal sensitivity function was used to filter the images before applying the spatial templates. For a frame rate of five frames per second, the only CHO that we calculated performed worse than the humans in a 4-AFC experiment. The other observers were variations of PPW and outperformed human observers in every case. This initial frame rate was rather low, and the temporal filtering did not affect the results compared to a data set with no human temporal effects taken into account. We subsequently investigated two higher speeds, 15 and 30 frames per second. We observed that for large masses, the two types of model observers investigated outperformed the human observers and would be suitable with the appropriate addition of internal noise. However, for microcalcifications, only the PPW observer consistently outperformed the humans. The study demonstrated the possibility of using a model observer which takes into account the temporal effects of scrolling through an image stack while being able to effectively detect a range of mass sizes and distributions.
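The channelized Hotelling observer mentioned above can be sketched in a few lines: images are projected onto a small set of channels, and the linear template is built from the channel-space mean difference and covariance. The channels, backgrounds, and signal below are synthetic stand-ins, not the study's difference-of-Gaussian channels or tomosynthesis data:

```python
import numpy as np

# Toy CHO sketch: project images onto channels, then form the Hotelling
# template w = K^-1 (mean_present - mean_absent) in channel space.
rng = np.random.default_rng(2)
npix, nchan, ntrain = 64, 5, 500
U = rng.normal(size=(npix, nchan))            # stand-in channel matrix (columns = channels)
signal = rng.normal(size=npix)                # stand-in signal profile

bg = rng.normal(size=(ntrain, npix))          # signal-absent backgrounds
sp = bg + signal                              # signal-present images
v_a, v_p = bg @ U, sp @ U                     # channel outputs for each class
K = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))     # pooled channel covariance (5 x 5)
w = np.linalg.solve(K, v_p.mean(0) - v_a.mean(0))   # Hotelling template

# Decision variable: on average higher for signal-present channel outputs
t_a, t_p = v_a @ w, v_p @ w
print(t_p.mean() > t_a.mean())
```

Working in the low-dimensional channel space is what makes the covariance estimable from a realistic number of images; the study's PPW observer modifies the prewhitening step, which is not reproduced here.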
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at www.flow-r.org) and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant to assess other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, and so produces more realistic extents. The choices of the datasets and the algorithms are open to the user, which makes the model adaptable to various applications and dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower quality DEMs with 25 m resolution.
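The Holmgren-style multiple-flow-direction step that underlies such spreading algorithms can be sketched for a single cell: flow is distributed among the downslope neighbours in proportion to tan(slope) raised to an exponent x. This is a generic illustration of the base algorithm, not Flow-R's modified version, and the elevations, spacing, and exponent are invented:

```python
import numpy as np

# Holmgren-style flow partitioning for one DEM cell: the fraction of flow sent
# to each lower neighbour is proportional to slope**x. Larger x concentrates
# flow into the steepest direction (more channelized); x here is illustrative.
def holmgren_weights(z_center, z_neighbours, distances, x=4.0):
    slopes = (z_center - np.asarray(z_neighbours)) / np.asarray(distances)
    slopes = np.where(slopes > 0, slopes, 0.0)   # only downslope neighbours receive flow
    w = slopes ** x
    total = w.sum()
    return w / total if total > 0 else w         # proportions sum to 1 (or all zero in a pit)

# 8 neighbours of a cell at elevation 100 m on a 10 m grid (diagonals ~14.14 m)
z_nb = [98, 99, 101, 97, 100, 102, 96, 99.5]
d = [10, 10, 10, 10, 10, 10, 14.14, 14.14]
w = holmgren_weights(100.0, z_nb, d)
print(round(float(w.sum()), 6))
```

Iterating this partitioning cell by cell downslope from a source area yields the propagation extent; Flow-R's improvement, per the abstract, reduces sensitivity to small DEM irregularities, which this plain version does not address.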
Abstract:
A new parameter is introduced: the lightning potential index (LPI), which is a measure of the potential for charge generation and separation that leads to lightning flashes in convective thunderstorms. The LPI is calculated within the charge separation region of clouds between 0 °C and −20 °C, where the noninductive mechanism involving collisions of ice and graupel particles in the presence of supercooled water is most effective. As shown in several case studies using the Weather Research and Forecasting (WRF) model with explicit microphysics, the LPI is highly correlated with observed lightning. It is suggested that the LPI may be a useful parameter for predicting lightning as well as a tool for improving weather forecasting of convective storms and heavy rainfall.
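The structure of such an index can be sketched schematically: weight the updraft kinetic energy by a factor that peaks where supercooled liquid and ice coexist, and average over grid cells in the 0 °C to −20 °C layer. This is a schematic stand-in, not the published LPI formula, and all profile values are invented:

```python
import numpy as np

# Schematic LPI-like index (not the published formula): updraft kinetic energy
# w**2 weighted by a coexistence factor eps in (0, 1] that is largest when
# supercooled liquid and ice are present in comparable amounts, averaged over
# the charge-separation layer (0 to -20 C). Profile values are illustrative.
T = np.array([5.0, -2.0, -8.0, -15.0, -25.0])     # layer temperatures (deg C)
w = np.array([1.0, 4.0, 6.0, 5.0, 3.0])           # updraft speed (m/s)
q_liq = np.array([1.0, 0.8, 0.5, 0.2, 0.0])       # supercooled liquid water (g/kg)
q_ice = np.array([0.0, 0.1, 0.4, 0.6, 0.8])       # ice/graupel mixing ratio (g/kg)

in_zone = (T <= 0.0) & (T >= -20.0)               # charge-separation layer mask
eps = np.zeros_like(T)
both = (q_liq > 0) & (q_ice > 0)                  # coexistence required for eps > 0
eps[both] = 2 * np.sqrt(q_liq[both] * q_ice[both]) / (q_liq[both] + q_ice[both])
lpi_like = np.mean(eps[in_zone] * w[in_zone] ** 2)
print(round(float(lpi_like), 3))
```

The key design point mirrored here is that the index vanishes whenever either hydrometeor species is absent, consistent with the noninductive charging mechanism requiring ice-graupel collisions in the presence of supercooled water.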
Abstract:
The Baby and the Couple provides an insider's view on how infant communication develops in the context of the family and how parents either work together as a team or struggle in the process. The authors present vignettes from everyday life as well as case studies from a longitudinal research project of infants and their parents interacting together in the Lausanne Trilogue Play (LTP), an assessment tool for very young families. Divided into three parts, the book focuses not only on the parents, but also on the infant's contribution to the family. Part 1 presents a case study of Lucas and his family, from infancy to age 5. With each chapter we see how, in the context of their families, infants learn to communicate with more than one person at a time. Part 2 explores how infants cope when their parents struggle to work together - excluding, competing or only connecting through their child. The authors follow several case examples from infancy through to early childhood to illustrate various forms of problematic co-parenting, along with the infant's derailed trajectory at different ages and stages. In Part 3, prevention and intervention models based on the LTP are presented. In addition to an overview of these programs, chapters are devoted to the Developmental Systems Consultation, which combines use of the LTP and video feedback, and a new model, Reflective Family Play, which allows whole families to engage in treatment. The Baby and the Couple is a vital resource for professionals working in the fields of infant and preschool mental health including psychiatrists, psychologists, social workers, family therapists and educators, as well as researchers.
Abstract:
RATIONALE: An objective and simple prognostic model for patients with pulmonary embolism could be helpful in guiding initial intensity of treatment. OBJECTIVES: To develop a clinical prediction rule that accurately classifies patients with pulmonary embolism into categories of increasing risk of mortality and other adverse medical outcomes. METHODS: We randomly allocated 15,531 inpatient discharges with pulmonary embolism from 186 Pennsylvania hospitals to derivation (67%) and internal validation (33%) samples. We derived our prediction rule using logistic regression with 30-day mortality as the primary outcome, and patient demographic and clinical data routinely available at presentation as potential predictor variables. We externally validated the rule in 221 inpatients with pulmonary embolism from Switzerland and France. MEASUREMENTS: We compared mortality and nonfatal adverse medical outcomes across the derivation and two validation samples. MAIN RESULTS: The prediction rule is based on 11 simple patient characteristics that were independently associated with mortality and stratifies patients with pulmonary embolism into five severity classes, with 30-day mortality rates of 0-1.6% in class I, 1.7-3.5% in class II, 3.2-7.1% in class III, 4.0-11.4% in class IV, and 10.0-24.5% in class V across the derivation and validation samples. Inpatient death and nonfatal complications were ≤1.1% among patients in class I and ≤1.9% among patients in class II. CONCLUSIONS: Our rule accurately classifies patients with pulmonary embolism into classes of increasing risk of mortality and other adverse medical outcomes. Further validation of the rule is important before its implementation as a decision aid to guide the initial management of patients with pulmonary embolism.
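The mechanics of a point-based rule of this kind—summing weighted bedside findings and mapping the total to one of five severity classes—can be sketched as follows. The predictors mirror the kinds of variables such rules use, but the point values and class cut-offs here are hypothetical, not the published rule:

```python
# Illustrative sketch of a point-based severity rule: each clinical finding
# contributes a fixed number of points, age contributes its value in years,
# and the total is mapped to one of five classes. All weights and cut-offs
# below are invented for illustration.
POINTS = {
    "cancer": 30, "heart_failure": 10, "chronic_lung_disease": 10,
    "pulse_ge_110": 20, "sbp_lt_100": 30, "resp_rate_ge_30": 20,
    "temp_lt_36": 20, "altered_mental_status": 60, "sao2_lt_90": 20,
    "male": 10,
}

def severity_class(age, **findings):
    # Total score = age in years + points for each finding that is present.
    score = age + sum(POINTS[k] for k, present in findings.items() if present)
    cutoffs = [(65, "I"), (85, "II"), (105, "III"), (125, "IV")]
    for limit, label in cutoffs:
        if score <= limit:
            return label
    return "V"

print(severity_class(45))                               # low-risk patient -> "I"
print(severity_class(80, male=True, sbp_lt_100=True))   # higher-risk patient -> "IV"
```

The appeal of this structure, as the abstract notes, is that every input is routinely available at presentation, so the class can be computed at the bedside without laboratory turnaround.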