932 results for predictive power
Abstract:
After briefly presenting two conceptual currents concerning emotional intelligence (EI), we examine its impact on seven aspects of the world of work: leadership, personnel training, personnel selection, task performance, conflict management, work attitudes, and well-being at work. Although emotional intelligence initially promised to transform the world of work, a large number of studies show that it falls short, particularly with respect to its validity. Compared with the predictive value of the intelligence quotient (IQ), emotional intelligence shows weak predictive power, despite the well-designed instruments its proponents have developed to measure its effects.
Abstract:
The purpose of this study was to assess the intention to exercise among ethnically and racially diverse community college students using the Theory of Planned Behavior (TPB). In addition to identifying the variables associated with the motivation or intention of college students to engage in physical activity, this study tested the TPB model, asking: does the model explain intention to exercise among a racially/ethnically diverse group of college students? The relevant variables were the TPB constructs (behavioral beliefs, normative beliefs, and control beliefs), which combined to form a measure of intention to exercise. Structural Equation Modeling (SEM) was used to test the power of the TPB constructs to predict intention to exercise. Following procedures described by Ajzen (2002), the researcher developed a questionnaire encompassing the external variables of student demographics (age, gender, work status, student status, socio-economic status, access to exercise facilities, and past behavior), the major constructs of the TPB, and two questions from the Godin Leisure Time Questionnaire (GLTQ; Godin & Shephard, 1985). Participants were students (N = 255) enrolled in an on-campus wellness course at an urban community college. The demographic profile of the sample revealed a racially/ethnically diverse study population. The original model reflecting the TPB as developed by Ajzen was not supported by the data analyzed using SEM; however, a revised model that the researcher considered a theoretically more accurate reflection of the causal relations between the TPB constructs was supported. The GLTQ questions were problematic for some students; those data could not be used in the modeling efforts. The GLTQ measure, however, revealed a significant correlation with intention to exercise (r = .27, p = .001). Post-hoc comparisons revealed significant differences in normative beliefs and attitude toward exercising behavior between Black students and Hispanic students. Compared to Black students, Hispanic students were more likely to (a) perceive “friends” as approving of them being physically active and (b) rate being physically active for 30 minutes per day as “beneficial”. No statistically significant difference was found among groups on overall intention to exercise.
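To make the modeling step concrete, here is a minimal SEM sketch in Python using the semopy package (an assumption for illustration; the study does not name its SEM software, and the construct variable names below are hypothetical):

```python
# Minimal SEM sketch for a TPB-style intention model (semopy).
# Variable names (attitude, norms, control, intention) are hypothetical.
import pandas as pd
import semopy

# lavaan-style model description: intention regressed on the three TPB constructs
desc = """
intention ~ attitude + norms + control
"""

df = pd.read_csv("tpb_survey.csv")  # hypothetical file of construct scores

model = semopy.Model(desc)
model.fit(df)
print(model.inspect())           # parameter estimates and p-values
print(semopy.calc_stats(model))  # fit indices (CFI, RMSEA, etc.)
```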
Abstract:
Phenotypic plasticity describes the phenotypic adjustment of the same genotype to different environmental conditions and is best described by a reaction norm. We focus on the effect of ocean acidification (OA) on inter- and intraspecific reaction norms of three globally important phytoplankton species (Emiliania huxleyi, Gephyrocapsa oceanica, Chaetoceros affinis). Despite significant differences in growth rates between the species, all three showed a high potential for phenotypic buffering (no significant difference in growth rates between ambient and high-CO2 conditions). Only three coccolithophore genotypes showed reduced growth under high CO2. However, the largely diverging responses to high CO2 of single coccolithophore genotypes compared with the respective mean species responses raise the question of whether extrapolation to the population level is possible from single-genotype experiments. We therefore compared the mean response of all tested genotypes with a total species response comprising the same genotypes, and found no significant difference in the coccolithophores. Assessing a species' reaction norm to different environmental conditions on short time scales in a genotype mix could thus reduce sampling effort while increasing predictive power.
Abstract:
This thesis develops bootstrap methods for factor models, which have been commonly used to generate forecasts since the pioneering article of Stock and Watson (2002) on diffusion indices. These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. My thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two in collaboration with Sílvia Gonçalves and Benoit Perron. In the first article, we study how bootstrap methods can be used for inference in models that forecast h periods into the future. To this end, it examines bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. It generalizes the results of Gonçalves and Perron (2014) and proposes and justifies two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates for the confidence intervals of the estimated coefficients using these approaches, compared with asymptotic theory and the wild bootstrap, in the presence of serial correlation in the regression errors. The second chapter proposes bootstrap methods for constructing prediction intervals that relax the assumption of normally distributed innovations. We propose bootstrap prediction intervals for an observation h periods into the future and for its conditional mean. We assume these forecasts are made using a set of factors extracted from a large panel of variables. Because we treat these factors as latent, our forecasts depend on both the estimated factors and the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed the construction of asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct valid prediction intervals under more general assumptions. Moreover, even under Gaussianity, the bootstrap yields more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014). In the third chapter, we propose consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion whose validity we establish generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improved probability of parsimoniously selecting the estimated factors compared with available selection methods.
The empirical application revisits the relationship between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, the factors strongly correlated with interest rate spreads and the Fama-French factors have good predictive power for excess returns.
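To make the core resampling idea concrete, here is a minimal sketch of a residual-based wild bootstrap in a factor-augmented regression with PCA-extracted factors, on simulated data. This is an illustration under assumptions, not the thesis code; the block wild bootstrap and dependent wild bootstrap studied in the thesis would replace the iid Rademacher weights below with block-wise or serially dependent weights.

```python
# Wild-bootstrap confidence intervals for the coefficients of a
# factor-augmented regression y = F_hat @ alpha + eps, with F_hat from PCA.
import numpy as np

rng = np.random.default_rng(1)
T, N, r = 200, 50, 2                      # periods, panel size, factors

F = rng.normal(size=(T, r))               # true latent factors
X = F @ rng.normal(size=(r, N)) + rng.normal(scale=0.5, size=(T, N))
y = F @ np.array([1.0, -0.5]) + rng.normal(scale=0.3, size=T)

# extract factors as principal components of the standardized panel
Xs = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
F_hat = Xs @ Vt[:r].T                     # estimated factors (up to rotation)

alpha_hat, *_ = np.linalg.lstsq(F_hat, y, rcond=None)
resid = y - F_hat @ alpha_hat

B = 999
boot = np.empty((B, r))
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=T)   # iid Rademacher weights (wild bootstrap)
    y_star = F_hat @ alpha_hat + resid * v
    boot[b], *_ = np.linalg.lstsq(F_hat, y_star, rcond=None)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("alpha_hat:", alpha_hat)
print("95% bootstrap CIs:", list(zip(lo, hi)))
```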
Abstract:
Verbal fluency is the ability to produce a satisfying sequence of spoken words during a given time interval. The core of verbal fluency lies in the capacity to manage the executive aspects of language. The standard scores of the semantic verbal fluency test are broadly used in the neuropsychological assessment of the elderly, and different analytical methods are likely to extract even more information from the data generated in this test. Graph theory, a mathematical approach to analyzing relations between items, represents a promising tool to understand a variety of neuropsychological states. This study reports a graph analysis of data generated by the semantic verbal fluency test in cognitively healthy elderly (NC), patients with Mild Cognitive Impairment (the amnestic (aMCI) and amnestic multiple domain (a+mdMCI) subtypes), and patients with Alzheimer's disease (AD). Sequences of words were represented as a speech graph in which every word corresponded to a node and temporal links between words were represented by directed edges. To characterize the structure of the data we calculated 13 speech graph attributes (SGAs). The individuals were compared when divided into three (NC – MCI – AD) and four (NC – aMCI – a+mdMCI – AD) groups. When the three groups were compared, significant differences were found in the standard measure of correct words produced and in three SGAs: diameter, average shortest path, and network density. SGAs sorted the elderly groups with good specificity and sensitivity. When the four groups were compared, the groups differed significantly in network density, except between the two MCI subtypes and between NC and aMCI. The diameter of the network and the average shortest path differed significantly between NC and AD, and between aMCI and AD. SGAs sorted the elderly into their groups with good specificity and sensitivity, performing better than the standard score of the task. These findings provide support for a new methodological frame to assess the strength of semantic memory through the verbal fluency task, with potential to amplify the predictive power of this test. Graph analysis is likely to become clinically relevant in neurology and psychiatry, and may be particularly useful for the differential diagnosis of the elderly.
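To illustrate the speech-graph construction, here is a minimal sketch in Python using networkx; the word sequence is invented, and the three attributes computed are the ones highlighted above (density, diameter, average shortest path), a subset of the 13 SGAs:

```python
# Speech graph: each word is a node, each temporal transition between
# consecutive words is a directed edge. The word list is an invented example.
import networkx as nx

words = ["dog", "cat", "horse", "cow", "cat", "lion", "tiger", "cat"]

G = nx.DiGraph()
G.add_edges_from(zip(words, words[1:]))   # consecutive words -> directed edge

density = nx.density(G)

# diameter and average shortest path computed on the undirected graph,
# guarding against disconnected productions
U = G.to_undirected()
if nx.is_connected(U):
    diameter = nx.diameter(U)
    avg_path = nx.average_shortest_path_length(U)
else:
    diameter = avg_path = float("nan")

print(f"nodes={G.number_of_nodes()} edges={G.number_of_edges()}")
print(f"density={density:.3f} diameter={diameter} avg_shortest_path={avg_path:.3f}")
```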
Abstract:
Background. The value of respiratory variables as weaning predictors in the intensive care unit (ICU) is controversial. We evaluated the ability of tidal volume (Vtexp), respiratory rate (f), minute volume (MVexp), rapid shallow breathing index (f/Vt), inspired–expired oxygen concentration difference [(I–E)O2], and end-tidal carbon dioxide concentration (PE′CO2) at the end of a weaning trial to predict early weaning outcomes. Methods. Seventy-three patients who required >24 h of mechanical ventilation were studied. A controlled pressure support weaning trial was undertaken until 5 cm H2O continuous positive airway pressure or predefined criteria were reached. The ability of data from the last 5 min of the trial to predict a predefined endpoint indicating discontinuation of ventilator support within the next 24 h was evaluated. Results. Pre-test probability for achieving the outcome was 44% in the cohort (n = 32). Non-achievers were older, had higher APACHE II and organ failure scores before the trial, and had higher baseline arterial H+ concentrations. The Vt, MV, f, and f/Vt had no predictive power using a range of cut-off values or from receiver operating characteristic (ROC) analysis. The (I–E)O2 and PE′CO2 had weak discriminatory power [area under the ROC curve: (I–E)O2 0.64 (P = 0.03); PE′CO2 0.63 (P = 0.05)]. Using best cut-off values for (I–E)O2 of 5.6% and PE′CO2 of 5.1 kPa, positive and negative likelihood ratios were 2 and 0.5, respectively, which only changed the pre- to post-test probability by about 20%. Conclusions. In unselected ICU patients, respiratory variables predict early weaning from mechanical ventilation poorly.
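The likelihood-ratio arithmetic behind the conclusion can be checked directly. A minimal sketch using only the numbers reported above (pre-test probability 0.44, LR+ = 2, LR- = 0.5):

```python
# Bayes' theorem in odds form: post-odds = pre-odds * LR.
def post_test_probability(pre_prob: float, lr: float) -> float:
    pre_odds = pre_prob / (1 - pre_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

pre = 0.44
print(f"positive test: {post_test_probability(pre, 2.0):.2f}")  # ~0.61
print(f"negative test: {post_test_probability(pre, 0.5):.2f}")  # ~0.28
```

With these likelihood ratios the post-test probability moves from 44% to roughly 61% or 28%, i.e. by less than 20 points in either direction, consistent with the stated conclusion.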
Abstract:
The main purpose of the current study was to examine the role of vocabulary knowledge (VK) and syntactic knowledge (SK) in L2 listening comprehension, as well as their relative significance. Unlike previous studies, the current project employed assessment tasks to measure aural and proceduralized VK and SK. In terms of VK, to avoid under-representing the construct, measures of both breadth (VB) and depth (VD) were included. Additionally, the current study examined the role of VK and SK while accounting for individual differences in two important cognitive factors in L2 listening: metacognitive knowledge (MK) and working memory (WM). To explore the role of VK and SK more fully, the current study also accounted for the negative impact of anxiety on WM and L2 listening. The study was carried out in an English as a Foreign Language (EFL) context, and participants were 263 Iranian learners across a wide range of English proficiency levels, from lower-intermediate to advanced. Participants took a battery of ten linguistic, cognitive, and affective measures. The collected data were subjected to several preliminary analyses, and structural equation modeling (SEM) was then used as the primary analysis method to answer the research questions. Results of the preliminary analyses revealed that MK and WM were significant predictors of L2 listening ability; thus, they were kept in the main SEM analyses. The significant role of WM was only observed when the negative effect of anxiety on WM was accounted for. Preliminary analyses also showed that VB and VD were not distinct measures of VK. However, the results also showed that if VB and VD were considered separately, VD was a better predictor of L2 listening success. The main analyses revealed a significant role for both VK and SK in explaining success in L2 listening comprehension, which differs from findings of previous empirical studies. However, SEM analysis did not reveal a statistically significant difference in the predictive power of the two linguistic factors. Descriptive results of the SEM analysis, along with results from regression analysis, indicated a more significant role for VK.
Abstract:
In this paper we consider a class of scalar integral equations with a form of space-dependent delay. These non-local models arise naturally when modelling neural tissue with active axons and passive dendrites. Such systems are known to support a dynamic (oscillatory) Turing instability of the homogeneous steady state. In this paper we develop a weakly nonlinear analysis of the travelling and standing waves that form beyond the point of instability. The appropriate amplitude equations are found to be the coupled mean-field Ginzburg-Landau equations describing a Turing-Hopf bifurcation with modulation group velocity of O(1). Importantly we are able to obtain the coefficients of terms in the amplitude equations in terms of integral transforms of the spatio-temporal kernels defining the neural field equation of interest. Indeed our results cover not only models with axonal or dendritic delays but those which are described by a more general distribution of delayed spatio-temporal interactions. We illustrate the predictive power of this form of analysis with comparison against direct numerical simulations, paying particular attention to the competition between standing and travelling waves and the onset of Benjamin-Feir instabilities.
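For orientation, coupled mean-field Ginzburg-Landau equations of the kind referred to above have a standard generic structure; the sketch below uses assumed notation rather than the paper's, with the coefficients mu, d, a, b standing in for the quantities the authors obtain as integral transforms of the spatio-temporal kernels:

```latex
% Generic coupled mean-field Ginzburg-Landau equations for the amplitudes
% A, B of counter-propagating waves; notation assumed for illustration.
\begin{align}
  \partial_T A &= \mu A + d\,\partial_X^2 A + a\,|A|^2 A + b\,\langle |B|^2 \rangle A, \\
  \partial_T B &= \mu B + d\,\partial_X^2 B + a\,|B|^2 B + b\,\langle |A|^2 \rangle B.
\end{align}
```

Here A and B are the amplitudes of the counter-propagating waves and the angle brackets denote a spatial average: with an O(1) modulation group velocity, each wave effectively interacts only with the mean intensity of its partner, which is what distinguishes the mean-field equations from the standard coupled Ginzburg-Landau pair.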
Abstract:
Considering the social and economic importance of milk, the objective of this study was to evaluate the incidence of antimicrobial residues in this food and to quantify them. Samples were collected from the dairy industry of southwestern Paraná state, covering all ten municipalities in the region of Pato Branco. The work focused on the development of appropriate models for the identification and quantification of five analytes: tetracycline, sulfamethazine, sulfadimethoxine, chloramphenicol, and ampicillin, all antimicrobials of health interest. For calibration and validation of the models, Fourier-transform infrared spectroscopy was used together with a chemometric method based on Partial Least Squares (PLS) regression. To prepare a working solution of antimicrobials for multiresidue analysis, the five analytes of interest were used in increasing doses: tetracycline from 0 to 0.60 ppm, sulfamethazine from 0 to 0.12 ppm, sulfadimethoxine from 0 to 2.40 ppm, chloramphenicol from 0 to 1.20 ppm, and ampicillin from 0 to 1.80 ppm. The performance of the constructed models was evaluated through figures of merit: mean square errors of calibration and cross-validation, correlation coefficients, and the ratio of performance to deviation. For the purposes of this work, the models generated for tetracycline, sulfadimethoxine, and chloramphenicol were considered viable, with the greatest predictive power and efficiency, and were then employed to evaluate the quality of raw milk from the region of Pato Branco. Among the samples analyzed by NIR, 70% were in conformity with sanitary legislation, and 5% of these samples had concentrations below the permitted Maximum Residue Limit, which is also satisfactory. However, 30% of the sample set showed unsatisfactory results when evaluated for contamination with antimicrobial residues, a nonconformity related to the presence of antimicrobials of unauthorized use or to concentrations above the permitted limits. This work shows that laboratory testing in the food area using infrared spectroscopy with multivariate calibration is fast, reduces costs, and generates minimal laboratory waste. Thus, the proposed alternative method meets the quality and efficiency demands of industrial sectors and society in general.
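As an illustration of this style of calibration, here is a minimal PLS sketch in Python with scikit-learn on simulated spectra; the analyte, concentration range, and spectral shape are invented stand-ins, not the study's data:

```python
# PLS calibration: spectra -> concentration, evaluated by cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_samples, n_wavenumbers = 60, 400

conc = rng.uniform(0.0, 0.6, n_samples)            # e.g., tetracycline, ppm
peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 180) / 8.0) ** 2)
spectra = np.outer(conc, peak) + rng.normal(scale=0.01, size=(n_samples, n_wavenumbers))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, spectra, conc, cv=10).ravel()

rmsecv = np.sqrt(np.mean((pred - conc) ** 2))      # root mean square error of CV
r = np.corrcoef(pred, conc)[0, 1]                  # correlation coefficient
print(f"RMSECV = {rmsecv:.4f} ppm, r = {r:.3f}")
```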
Abstract:
Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. We first consider in Chapter 2 a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals, for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data from the crisis onwards, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time variation in parameters, and of other sources of uncertainty, in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond 1 month. At shorter horizons, however, our methods fail to forecast better than the RW, and we identify uncertainty in coefficient estimation, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability. Chapter 4 focuses on the problem of the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the 1-month horizon, and outperforms alternative methods, including Bayesian methods, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supply and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive high posterior support.
The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
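To make the MIDAS ingredient of Chapter 5 concrete, here is a minimal sketch of an exponential Almon weighting scheme that aggregates daily lags into a monthly regressor; the parameter values and lag count are illustrative assumptions, not the thesis specification:

```python
# Exponential Almon lag weights, the standard MIDAS polynomial that maps
# many high-frequency lags into one low-frequency regressor.
import numpy as np

def exp_almon_weights(n_lags: int, theta1: float, theta2: float) -> np.ndarray:
    """Normalized exponential Almon lag weights."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

# e.g., 22 trading days aggregated into one monthly predictor
weights = exp_almon_weights(22, theta1=0.05, theta2=-0.01)

daily_returns = np.random.default_rng(3).normal(scale=0.01, size=22)
monthly_predictor = weights @ daily_returns   # weighted sum of daily lags
print(weights.round(3), monthly_predictor)
```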
Abstract:
Hepatocyte growth factor (HGF) plays a role in the improvement of cardiac function and remodeling. Its serum levels are strongly related to mortality in chronic systolic heart failure (HF). The aim of this study was to assess the prognostic value of HGF in acute HF and its interaction with ejection fraction, renal function, and natriuretic peptides. We included 373 patients (age 76 ± 10 years, left ventricular ejection fraction [LVEF] 46 ± 14%, 48% men) consecutively admitted for acute HF. Blood samples were obtained at admission. All patients were followed up until death or the close of the study (>1 year, median 371 days). HGF concentrations were determined using a commercial enzyme-linked immunosorbent assay (human HGF immunoassay). The predictive power of HGF was estimated by Cox regression with calculation of Harrell's C-statistic. HGF had a median of 1,942 pg/ml (interquartile range 1,354). Across HGF quartiles, mortality rates (per 1,000 patient-years) were 98, 183, 375, and 393, respectively (p <0.001). In Cox regression analysis, HGF (hazard ratio per 1 SD = 1.5, 95% confidence interval 1.1 to 2.1, p = 0.002) and N-terminal pro-B-type natriuretic peptide (NT-proBNP; hazard ratio per 1 SD = 1.8, 95% confidence interval 1.2 to 2.6, p = 0.002) were independent predictors of mortality. Interactions between HGF and LVEF, ischemic origin, and renal function were nonsignificant. The addition of HGF improved the predictive ability of the models (C-statistic 0.768 vs 0.741, p = 0.016). HGF showed complementary value over NT-proBNP (p = 0.001): the mortality rate was 490 when both were above the median versus 72 when both were below. In conclusion, in patients with acute HF, serum HGF concentrations are elevated and identify patients at higher risk of mortality, regardless of LVEF, ischemic origin, or renal function. HGF carried independent and additive information over NT-proBNP.
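A minimal sketch of this style of analysis, in Python with the lifelines package on simulated data (variable names, effect sizes, and the censoring scheme are invented; this is not the study's code):

```python
# Cox regression with a standardized biomarker, reporting the hazard ratio
# per 1 SD and the model's concordance (Harrell C).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 373
hgf_z = rng.normal(size=n)                 # standardized log-HGF (per 1 SD)
hazard = 0.1 * np.exp(0.4 * hgf_z)         # higher HGF -> higher hazard
time = rng.exponential(1.0 / hazard)
observed = time < 1.0                      # administrative censoring at 1 year
df = pd.DataFrame({"T": np.minimum(time, 1.0),
                   "E": observed.astype(int),
                   "hgf_z": hgf_z})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()                        # hazard ratio per 1 SD of HGF
print("Harrell C-statistic:", cph.concordance_index_)
```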
Abstract:
The concept of patient activation has gained traction as the term referring to patients who understand their role in the care process and have “the knowledge, skills and confidence” necessary to manage their illness over time (Hibbard & Mahoney, 2010). Improving health outcomes for vulnerable and underserved populations who bear a disproportionate burden of health disparities presents unique challenges for nurse practitioners who provide primary care in nurse-managed health centers. Evidence that activation improves patient self-management is prompting the search for theory-based self-management support interventions to activate patients for self-management, improve health outcomes, and sustain long-term gains. Yet no previous studies have investigated the relationship between Self-Determination Theory (SDT; Deci & Ryan, 2000) and activation. The major purpose of this study, guided by the Triple Aim (Berwick, Nolan, & Whittington, 2008) and nested in the Chronic Care Model (Wagner et al., 2001), was to examine the degree to which two constructs, Autonomy Support and Autonomous Motivation, independently predicted Patient Activation, controlling for covariates. For this study, 130 nurse-managed health center patients completed a 38-item online survey onsite. The two independent measures were the 6-item Modified Health Care Climate Questionnaire (mHCCQ; Williams, McGregor, King, Nelson, & Glasgow, 2005; Cronbach's alpha = 0.89) and the 8-item adapted Treatment Self-Regulation Questionnaire (TSRQ; Williams, Freedman, & Deci, 1998; Cronbach's alpha = 0.80). The Patient Activation Measure (PAM-13; Hibbard, Mahoney, Stock, & Tusler, 2005; Cronbach's alpha = 0.89) was the dependent measure. Autonomy Support was the only significant predictor, explaining 19.1% of the variance in patient activation. Five of the six autonomy support survey items regressed on activation were significant, illustrating how autonomy-supportive communication styles contribute to activation. These results suggest theory-based patient, provider, and system-level interventions to enhance self-management in primary care, as well as educational and professional development curricula. Future investigations should examine additional sources of autonomy support and different measurements of autonomous motivation to improve the predictive power of the model. Longitudinal analyses should be conducted to further understand the relationship of autonomy support and autonomous motivation with patient activation, based on the premise that patient activation will sustain behavior change.
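A minimal sketch of the kind of regression reported above, in Python with statsmodels on simulated data (column names, the covariate, and effect sizes are invented stand-ins for the mHCCQ, TSRQ, and PAM-13 scores):

```python
# OLS: Patient Activation regressed on Autonomy Support and Autonomous
# Motivation with a covariate, reporting coefficients and R-squared.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 130
df = pd.DataFrame({
    "autonomy_support": rng.normal(size=n),       # mHCCQ score (standardized)
    "autonomous_motivation": rng.normal(size=n),  # TSRQ score (standardized)
    "age": rng.integers(18, 80, n),
})
df["pam"] = 0.45 * df.autonomy_support + rng.normal(scale=1.0, size=n)

model = smf.ols("pam ~ autonomy_support + autonomous_motivation + age",
                data=df).fit()
print(model.summary())   # coefficients, p-values, and R-squared
```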