976 results for Predictor model


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to examine a model of personality and health. Specifically, this thesis examined perfectionism as a predictor of health status and health behaviours, as moderated by coping styles. A community sample of 813 young adults completed the Multidimensional Perfectionism Scale, the Coping Strategy Indicator, and measures of health symptoms, health care utilization, and various health behaviours. Multiple regression analyses revealed a number of significant findings. First, perfectionism and coping styles contributed significant main effects in predicting health status and health behaviours, although coping styles were not shown to moderate the perfectionism-health relationship. The data showed that perfectionism did constitute a health risk, both in terms of health status and health behaviours. Finally, an unexpected finding was that perfectionism also included adaptive features related to health. Specifically, some dimensions of perfectionism were also associated with reports of better health status and involvement in some positive health behaviours.
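The moderation test described above amounts to adding a perfectionism × coping interaction term to the regression. The following sketch uses simulated data (variable names, effect sizes, and noise level are invented; only the sample size of 813 comes from the abstract) to show how a null interaction coefficient indicates no moderation while main effects remain:

```python
import numpy as np

# Illustrative sketch (not the study's data): testing whether coping style
# moderates the perfectionism-health link via a regression interaction term.
rng = np.random.default_rng(0)
n = 813  # sample size matching the study
perfectionism = rng.normal(size=n)
coping = rng.normal(size=n)
# Simulated health symptoms: main effects only, no true interaction.
health = 1.0 + 0.4 * perfectionism - 0.3 * coping + rng.normal(scale=0.5, size=n)

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), perfectionism, coping, perfectionism * coping])
beta, *_ = np.linalg.lstsq(X, health, rcond=None)

b0, b_perf, b_cope, b_interact = beta
# With no simulated moderation, the interaction coefficient stays near zero
# while both main effects are recovered.
```

The study's null moderation finding corresponds to `b_interact` being statistically indistinguishable from zero while the main-effect coefficients remain significant.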

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to replicate and extend a motivational model of problem drinking (Cooper, Frone, Russell, & Mudar, 1995; Read, Wood, Kahler, Maddock, & Palfai, 2003), testing the notion that attachment is a common antecedent for both the affective and social paths to problem drinking. The model was tested with data from three samples: first-year university students (N=679), students about to graduate from university (N=206), and first-time clients at an addiction treatment facility (N=211). Participants completed a battery of questionnaires assessing alcohol use, alcohol-related consequences, drinking motives, peer models of alcohol use, positive and negative affect, attachment anxiety, and attachment avoidance. Results underscored the importance of the affective path to problem drinking, while calling the social path into question. While drinking to cope was most prominent among the clinical sample, coping motives served as a risk factor for problem drinking both for individuals identified as problem drinkers and for university students. Moreover, drinking for enhancement purposes appeared to be the strongest overall predictor of alcohol use. Results of the present study also supported the notion that attachment anxiety and avoidance are antecedents of the affective path to problem drinking: those with higher levels of attachment anxiety and avoidance were more vulnerable to experiencing adverse consequences related to their drinking, explained in terms of diminished affect regulation. Evidence that insecure attachment is a potent predictor of problem drinking was also provided by the finding that attachment anxiety was directly related to alcohol-related consequences over and above its indirect relationship through affect regulation. However, results failed to show that attachment anxiety or attachment avoidance increased the risk of problem drinking via social influence.
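The "affective path" tested above is an indirect (mediated) effect. As a hypothetical sketch, the product-of-coefficients estimate of such an indirect effect can be computed from two regressions; the path labels, effect sizes, and data below are invented, and only the first-year sample size (N=679) comes from the abstract:

```python
import numpy as np

# Hypothetical sketch of an affective path: attachment anxiety -> coping
# motives -> alcohol-related consequences, with the indirect effect estimated
# as the product of path coefficients (not the study's actual data or model).
rng = np.random.default_rng(1)
n = 679  # first-year student sample size from the abstract
anxiety = rng.normal(size=n)
coping_motives = 0.5 * anxiety + rng.normal(scale=0.8, size=n)
consequences = 0.3 * anxiety + 0.6 * coping_motives + rng.normal(scale=0.8, size=n)

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    x = x - x.mean()
    return float(x @ (y - y.mean()) / (x @ x))

a = slope(anxiety, coping_motives)        # attachment -> motive path
# b: motive -> consequences, controlling for anxiety (two-predictor OLS).
X = np.column_stack([np.ones(n), anxiety, coping_motives])
beta, *_ = np.linalg.lstsq(X, consequences, rcond=None)
b = float(beta[2])
indirect_effect = a * b                   # product-of-coefficients estimate
```

The abstract's finding that anxiety predicted consequences "over and above its indirect relationship" corresponds to the direct coefficient (`beta[1]` here) remaining significant alongside a nonzero `indirect_effect`.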

Relevance:

30.00%

Publisher:

Abstract:

Dehumanizing ideologies that explicitly liken other humans to “inferior” animals can have negative consequences for intergroup attitudes and relations. Surprisingly, very little is known about the causes of dehumanization, and essentially no research has examined strategies for reducing dehumanizing tendencies. The Interspecies Model of Prejudice specifies that animalistic dehumanization may be rooted in basic hierarchical beliefs regarding human superiority over animals. This theoretical reasoning suggests that narrowing the human-animal divide should also reduce dehumanization. The purpose of the present dissertation, therefore, was to gain a more complete understanding of the predictors of and solutions to dehumanization by examining the Interspecies Model of Prejudice, first from a layperson’s perspective and then among young children. In Study 1, laypeople strongly rejected the human-animal divide as a probable cause of, or solution to, dehumanization, despite evidence that their own personal beliefs in the human-animal divide positively predicted their dehumanization (and prejudice) scores. From Study 1, it was concluded that the human-animal divide, despite being a robust empirical predictor of dehumanization, is largely unrecognized as a probable cause of, or solution to, dehumanization by non-experts in the psychology of prejudice. Studies 2 and 3 explored the expression of dehumanization, as well as the Interspecies Model of Prejudice, among children ages six to ten years (Studies 2 and 3) and their parents (Study 3). Across both studies, White children showed evidence of racial dehumanization by attributing fewer “uniquely human” characteristics to a Black child target than to a White child target, representing the first systematic evidence of racial dehumanization among children. In Study 3, path analyses supported the Interspecies Model of Prejudice among children.
Specifically, children’s beliefs in the human-animal divide predicted greater racial prejudice, an effect explained by heightened racial dehumanization. Moreover, parents’ Social Dominance Orientation (preference for social hierarchy and inequality) positively predicted children’s human-animal divide beliefs. Critically, these effects remained significant even after controlling for established predictors of child prejudice (i.e., parent prejudice, authoritarian parenting, and social-cognitive skills) and relevant child demographics (i.e., age and sex). Similar patterns emerged among parent participants, further supporting the Interspecies Model of Prejudice. Encouragingly, children reported narrower human-animal divide perceptions after being exposed to an experimental prime (versus control) that highlighted the similarities between humans and animals. Together, the three studies reported in this dissertation offer important and novel contributions to the dehumanization and prejudice literature. Not only did we find the first systematic evidence of racial dehumanization among children, but we also established the human-animal divide as a meaningful dehumanization precursor. Moreover, empirical support was obtained for the Interspecies Model of Prejudice among diverse samples including university students (Study 1), children (Studies 2 and 3), and adult-aged samples (Study 3). Importantly, each study also highlights the promising social implications of targeting the human-animal divide in interventions to reduce dehumanization and other prejudicial processes.

Relevance:

30.00%

Publisher:

Abstract:

Modeling nonlinear systems using Volterra series is a century-old method, but practical realizations were long hampered by hardware inadequate for the increased computational complexity its use entails. Interest has recently been renewed in designing and implementing filters that can model much of the polynomial nonlinearity inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that nonlinear filters so designed can work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (nonlinearity of order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanism and focus on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next value of the input speech signal. But this linear system does not account for the nonlinearities inherent in speech signals, which arise from multiple reflections in the vocal tract. A quadratic predictor is therefore designed and implemented in parallel with the linear predictor to yield improved mean square error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
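The benefit of the parallel quadratic term can be sketched on a toy signal. The following illustration (signal model, coefficients, and noise level are all invented, not from the paper) fits a linear predictor alone and then a linear plus quadratic (second-order Volterra) predictor, comparing residual mean square errors:

```python
import numpy as np

# Toy sketch of augmenting a DPCM-style linear predictor with a quadratic
# (2nd-order Volterra) term; the signal and coefficients are illustrative.
rng = np.random.default_rng(2)
n = 5000
x = np.zeros(n)
for t in range(1, n):
    # Signal with a mild quadratic dependence on its past value.
    x[t] = 0.8 * x[t - 1] + 0.2 * x[t - 1] ** 2 + rng.normal(scale=0.1)

prev = x[:-1]
target = x[1:]

# Linear predictor alone: least-squares fit of x[t] on x[t-1].
A_lin = prev[:, None]
w_lin, *_ = np.linalg.lstsq(A_lin, target, rcond=None)
mse_linear = float(np.mean((target - A_lin @ w_lin) ** 2))

# Linear + quadratic predictor working in parallel (Volterra order 2).
A_quad = np.column_stack([prev, prev ** 2])
w_quad, *_ = np.linalg.lstsq(A_quad, target, rcond=None)
mse_quadratic = float(np.mean((target - A_quad @ w_quad) ** 2))

# The quadratic term captures the polynomial nonlinearity the linear
# predictor misses, so the residual mean square error cannot increase
# and typically decreases.
```

In DPCM terms, a smaller prediction residual means less energy to quantize and transmit for the same reconstruction fidelity.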

Relevance:

30.00%

Publisher:

Abstract:

The problem of using information available from one variable X to make inference about another variable Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ, where µ(x) is the mean response at the predictor value X = x and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ. In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error; this is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutritional intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others.
In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random vector.
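A minimal simulation makes the Berkson setup concrete. The numbers below (dose range, error scales, true slope) are invented for illustration; the sketch shows the well-known property that for a linear µ, regressing Y on the observed Z still recovers the slope, unlike in the classical measurement error model where the slope is attenuated:

```python
import numpy as np

# Minimal Berkson-model simulation: the experimenter sets Z, the true
# predictor is X = Z + eta, and Y = mu(X) + eps, with mu linear.
rng = np.random.default_rng(3)
n = 20000
Z = rng.uniform(0.0, 10.0, size=n)       # nominal (applied) dose, observed
eta = rng.normal(scale=1.0, size=n)      # Berkson error
X = Z + eta                               # actual dose, unobservable
eps = rng.normal(scale=0.5, size=n)
Y = 2.0 + 1.5 * X + eps                   # linear mean response mu(X)

# OLS of Y on Z (the only regression the observed data allow).
A = np.column_stack([np.ones(n), Z])
(intercept, slope), *_ = np.linalg.lstsq(A, Y, rcond=None)
# slope comes out close to the true 1.5: Berkson error inflates the
# residual variance but does not attenuate the linear slope.
```

The interesting statistical questions the talk addresses arise for nonlinear parametric µ, where this convenient unbiasedness no longer holds in general.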

Relevance:

30.00%

Publisher:

Abstract:

Introduction: one cause of poor visual gain after successful, uncomplicated retinal detachment repair is photoreceptor damage, reflected in disruption of the ellipsoid zone layer and the external limiting membrane (ELM). In other pathologies, foveal hyperautofluorescence has been shown to correlate with the integrity of the ellipsoid zone and ELM and with better visual recovery. Objectives: to evaluate the association between foveal hyperautofluorescence, integrity of the ellipsoid zone layer, and visual recovery after successfully treated rhegmatogenous retinal detachment (RRD), and to evaluate the inter-rater agreement of these examinations. Methods: cross-sectional study of foveal autofluorescence and spectral-domain macular optical coherence tomography in 65 patients with RRD, assessed by 3 independent graders. Inter-rater agreement was studied using Cohen's kappa, and the association between variables using the chi-squared test and Z-tests for comparison of proportions. Results: agreement was fair for autofluorescence and good to very good for macular optical coherence tomography. Subjects with foveal hyperautofluorescence together with an intact ellipsoid zone layer were 20% more likely to recover a final visual acuity better than 20/50 than those without these characteristics. Conclusion: there is a clinically important association between foveal hyperautofluorescence, integrity of the ellipsoid zone layer, and better final visual acuity; however, the association was not statistically significant (p=0.39).
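The inter-rater agreement statistic used above, Cohen's kappa, corrects observed agreement for the agreement expected by chance from the raters' marginal distributions. A self-contained sketch (the two grading vectors are hypothetical, not the study's data):

```python
import numpy as np

# Illustrative computation of Cohen's kappa for two raters.
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    labels = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)
    # Expected agreement under independent marginal rating distributions.
    p_expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical hyperautofluorescence gradings (1 = present, 0 = absent)
# from two independent graders across 10 eyes.
grader_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
grader_b = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
kappa = cohens_kappa(grader_a, grader_b)  # 8/10 observed agreement -> 0.6
```

On the common interpretive scale, kappa near 0.2-0.4 is "fair" (as reported for autofluorescence) and 0.6-1.0 is "good to very good" (as reported for OCT).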

Relevance:

30.00%

Publisher:

Abstract:

In this chapter we described how the inclusion of a model of the human arm, combined with measurement of its neural input and a predictor, can give a previously proposed teleoperator design robustness under time delay. Our trials gave clear indications of the superiority of the NPT scheme over traditional architectures as well as the modified Yokokohji and Yoshikawa architecture. Its fundamental advantages are the time-lead of the slave, more efficient and more natural-feeling manipulation, and the fact that incorporating a model of the operator's arm leads to more credible stability results. Finally, its simplicity allows local control techniques that are less likely to fail to be employed. A significant advantage of the enhanced Yokokohji and Yoshikawa architecture, however, stems from the very fact that it is a conservative modification of current designs. Under large prediction errors, it can provide robustness by directing the master and slave states toward their means and, since it relies on the passivity of the mechanical part of the system, it will not confuse the operator. An experimental implementation of the techniques will provide further evidence of the performance of the proposed architectures. The employment of neural networks and fuzzy logic, which will provide an adaptive model of the human arm and robustifying control terms, is planned for the near future.

Relevance:

30.00%

Publisher:

Abstract:

Current European Union regulatory risk assessment allows application of pesticides provided that recovery of nontarget arthropods in-crop occurs within a year. Despite the long-established theory of source-sink dynamics, risk assessment ignores depletion of surrounding populations and typical field trials are restricted to plot-scale experiments. In the present study, the authors used agent-based modeling of 2 contrasting invertebrates, a spider and a beetle, to assess how the area of pesticide application and environmental half-life affect the assessment of recovery at the plot scale and impact the population at the landscape scale. Small-scale plot experiments were simulated for pesticides with different application rates and environmental half-lives. The same pesticides were then evaluated at the landscape scale (10 km × 10 km) assuming continuous year-on-year usage. The authors' results show that recovery time estimated from plot experiments is a poor indicator of long-term population impact at the landscape level and that the spatial scale of pesticide application strongly determines population-level impact. This raises serious doubts as to the utility of plot-recovery experiments in pesticide regulatory risk assessment for population-level protection. Predictions from the model are supported by empirical evidence from a series of studies carried out in the decade starting in 1988. The issues raised then can now be addressed using simulation. Prediction of impacts at landscape scales should be more widely used in assessing the risks posed by environmental stressors.

Relevance:

30.00%

Publisher:

Abstract:

Regional climate downscaling has arrived at an important juncture. Some in the research community favour continued refinement and evaluation of downscaling techniques within a broader framework of uncertainty characterisation and reduction. Others are calling for smarter use of downscaling tools, accepting that conventional, scenario-led strategies for adaptation planning have limited utility in practice. This paper sets out the rationale and new functionality of the Decision Centric (DC) version of the Statistical DownScaling Model (SDSM-DC). This tool enables synthesis of plausible daily weather series, exotic variables (such as tidal surge), and climate change scenarios guided, not determined, by climate model output. Two worked examples are presented. The first shows how SDSM-DC can be used to reconstruct and in-fill missing records based on calibrated predictor-predictand relationships. Daily temperature and precipitation series from sites in Africa, Asia and North America are deliberately degraded to show that SDSM-DC can reconstitute lost data. The second demonstrates the application of the new scenario generator for stress testing a specific adaptation decision. SDSM-DC is used to generate daily precipitation scenarios to simulate winter flooding in the Boyne catchment, Ireland. This sensitivity analysis reveals the conditions under which existing precautionary allowances for climate change might be insufficient. We conclude by discussing the wider implications of the proposed approach and research opportunities presented by the new tool.
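The in-filling idea in the first worked example can be sketched as calibrating a predictor-predictand regression on the intact part of a record and reconstituting the deliberately removed values. Everything below is synthetic, and a single stand-in predictor replaces the large-scale atmospheric predictors SDSM-DC actually uses:

```python
import numpy as np

# Sketch of regression-based in-filling: calibrate on the observed part of a
# station series, then reconstruct deliberately degraded values.
rng = np.random.default_rng(4)
n = 365
# Stand-in predictor series (e.g. a neighbouring station or reanalysis proxy).
predictor = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(scale=2, size=n)
predictand = 0.9 * predictor + 1.0 + rng.normal(scale=1.0, size=n)  # station record

# Deliberately degrade 20% of the predictand record.
missing = rng.random(n) < 0.2
observed = ~missing

# Calibrate the predictor-predictand relationship on the observed portion only.
A = np.column_stack([np.ones(observed.sum()), predictor[observed]])
coef, *_ = np.linalg.lstsq(A, predictand[observed], rcond=None)

# In-fill the gaps from the calibrated relationship.
filled = coef[0] + coef[1] * predictor[missing]
rmse = float(np.sqrt(np.mean((filled - predictand[missing]) ** 2)))
# rmse stays near the irreducible noise level, i.e. the lost data are
# largely reconstituted.
```

For precipitation (as in the Boyne example) SDSM-DC layers a stochastic weather-generator component on top of such deterministic relationships, which this sketch omits.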

Relevance:

30.00%

Publisher:

Abstract:

A parallel formulation for the simulation of a branch prediction algorithm is presented. This parallel formulation identifies independent tasks in the algorithm which can be executed concurrently. The parallel implementation is based on the multithreading model and two parallel programming platforms: pthreads and Cilk++. Improvement in execution performance by up to 7 times is observed for a generic 2-bit predictor in a 12-core multiprocessor system.
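For reference, a generic 2-bit predictor is a saturating counter with hysteresis: two consecutive mispredictions are needed to flip the prediction. A minimal sequential sketch (the paper's pthreads/Cilk++ parallel decomposition is not reproduced here, and the trace is invented):

```python
# Minimal sequential model of the generic 2-bit saturating-counter
# branch predictor that the paper's parallel simulation targets.

def simulate_2bit(outcomes):
    """Run a single 2-bit counter over a branch outcome trace; return accuracy."""
    state = 0  # 0,1 = predict not-taken; 2,3 = predict taken
    correct = 0
    for taken in outcomes:
        prediction = state >= 2
        if prediction == taken:
            correct += 1
        # Saturating update toward the actual outcome.
        if taken:
            state = min(state + 1, 3)
        else:
            state = max(state - 1, 0)
    return correct / len(outcomes)

# A loop-like trace: taken 9 times, then one not-taken exit, repeated 10 times.
trace = ([True] * 9 + [False]) * 10
accuracy = simulate_2bit(trace)  # 0.88: once warmed up, only the loop exit misses
```

The hysteresis is the point: a 1-bit predictor would mispredict twice per loop (the exit and the re-entry), while the 2-bit counter mispredicts only once. Parallelizing such a simulation exploits the independence of per-branch counter state across distinct branch addresses.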

Relevance:

30.00%

Publisher:

Abstract:

Instrumental observations, palaeo-proxies, and climate models suggest significant decadal variability within the North Atlantic subpolar gyre (NASPG). However, a poorly sampled observational record and a diversity of model behaviours mean that the precise nature and mechanisms of this variability are unclear. Here, we analyse an exceptionally large multi-model ensemble of 42 present-generation climate models to test whether NASPG mean state biases systematically affect the representation of decadal variability. Temperature and salinity biases in the Labrador Sea co-vary and influence whether density variability is controlled by temperature or salinity variations. Ocean horizontal resolution is a good predictor of the biases and the location of the dominant dynamical feedbacks within the NASPG. However, we find no link to the spectral characteristics of the variability. Our results suggest that the mean state and mechanisms of variability within the NASPG are not independent. This represents an important caveat for decadal predictions using anomaly-assimilation methods.

Relevance:

30.00%

Publisher:

Abstract:

Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 119-130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model.
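The core intuition behind BLUPs of realized cluster means is shrinkage of the within-cluster sample mean toward the grand mean. The toy sketch below (equal cluster sizes, known variance components, invented numbers) illustrates why the shrinkage predictor beats the raw mean in MSE; the paper's expanded finite population model handles the much harder unequal-cluster-size case:

```python
import numpy as np

# Toy two-stage sampling sketch: shrinkage (BLUP-style) prediction of the
# realized cluster means versus the raw within-cluster sample mean.
rng = np.random.default_rng(5)
n_clusters, m = 200, 5          # clusters, and units sampled per cluster
sigma_b, sigma_e = 1.0, 2.0     # between- and within-cluster std devs (known here)

cluster_means = rng.normal(0.0, sigma_b, size=n_clusters)   # latent values
samples = cluster_means[:, None] + rng.normal(0.0, sigma_e, size=(n_clusters, m))

ybar = samples.mean(axis=1)                  # raw cluster sample means
grand = ybar.mean()
shrink = sigma_b**2 / (sigma_b**2 + sigma_e**2 / m)   # BLUP shrinkage factor
blup = grand + shrink * (ybar - grand)

mse_raw = float(np.mean((ybar - cluster_means) ** 2))
mse_blup = float(np.mean((blup - cluster_means) ** 2))
# Shrinking toward the grand mean trades a little bias for much less
# variance, so mse_blup comes out well below mse_raw.
```

With these values the theoretical MSEs are 0.8 for the raw mean and about 0.44 for the shrinkage predictor; the paper's contribution is constructing the analogous optimal predictor under a finite population model with unequal cluster sizes.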

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

This paper presents for the first time how to easily incorporate FACTS devices into an optimal active power flow model so that an efficient interior-point method may be applied. The optimal active power flow model is based on a network flow approach instead of the traditional nodal formulation, which allows the use of an efficient predictor-corrector interior-point method sped up by sparsity exploitation. The mathematical equivalence between the network flow and nodal models is addressed, as well as the computational advantages of the former for solution by interior-point methods. The adequacy of the network flow model for representing FACTS devices is presented and illustrated on a small 5-bus system. The model was implemented in Matlab, and its performance was evaluated on the 3,397-bus, 4,075-branch Brazilian power system, demonstrating the robustness and efficiency of the proposed formulation. The numerical results also indicate an efficient tool for optimal active power flow that is suitable for incorporating FACTS devices.