28 results for no-net-loss goal


Relevance: 30.00%

Abstract:

Obesity has spread to all segments of the U.S. population. Young adults, aged 18-35 years, are rarely represented in clinical weight loss trials. We conducted a qualitative study to identify factors that may facilitate recruitment of young adults into a weight loss intervention trial. Participants were 33 adults aged 18-35 years with BMI ≥25 kg/m². Six group discussions were conducted using the nominal group technique. Health, social image, and "self" factors such as emotions, self-esteem, and confidence were reported as reasons to pursue weight loss. Physical activity, dietary intake, social support, medical intervention, and taking control (e.g., being motivated) were perceived as the best weight loss strategies. Incentives, positive outcomes, education, convenience, and social support were endorsed as reasons young adults would consider participating in a weight loss study. Incentives, advertisement, emphasizing benefits, and convenience were endorsed as ways to recruit young adults. These results informed the Cellphone Intervention for You (CITY) marketing and advertising, including message framing and advertising avenues. Implications for recruitment methods are discussed.

Relevance: 30.00%

Abstract:

BACKGROUND: The wide range of complex photic systems observed in birds exemplifies one of their key evolutionary adaptations, a well-developed visual system. However, genomic approaches have yet to be used to disentangle the evolutionary mechanisms that govern the evolution of avian visual systems. RESULTS: We performed comparative genomic analyses across 48 avian genomes that span extant bird phylogenetic diversity to assess evolutionary changes in 17 representatives of the opsin gene family and five plumage coloration genes. Our analyses suggest modern birds have maintained a repertoire of up to 15 opsins. Synteny analyses indicate that the PARA and PARIE pineal opsins were lost, probably in conjunction with the degeneration of the parietal organ. Eleven of the 15 avian opsins evolved in a non-neutral pattern, confirming the adaptive importance of vision in birds. Visual conopsins sw1, sw2 and lw evolved under negative selection, while the dim-light RH1 photopigment diversified. The evolutionary patterns of sw1 and of violet/ultraviolet sensitivity in birds suggest that avian ancestors had violet-sensitive vision. Additionally, we demonstrate an adaptive association between the RH2 opsin and the MC1R plumage color gene, suggesting that plumage coloration has been photically mediated. At the intra-avian level we observed some unique adaptive patterns. For example, the barn owl showed early signs of pseudogenization in RH2, perhaps in response to nocturnal behavior, and penguins had amino acid deletions in RH2 sites responsible for the red shift and retinal binding. These patterns in the barn owl and penguins were convergent with adaptive strategies in nocturnal and aquatic mammals, respectively. CONCLUSIONS: We conclude that birds have evolved diverse opsin adaptations through gene loss, adaptive selection and coevolution with plumage coloration, and that differentiated selective patterns at the species level suggest that novel photic pressures influence the evolutionary patterns of more recent lineages.

Relevance: 30.00%

Abstract:

BACKGROUND: Illicit cigarettes comprise more than 11% of tobacco consumption globally and 17% of consumption in low- and middle-income countries. Illicit cigarettes (defined as those that evade taxes) lower consumer prices, threaten national tobacco control efforts, and reduce excise tax collection. METHODS: This paper measures the magnitude of illicit cigarette consumption within Indonesia using two methods: discrepancies between legal cigarette sales and domestic consumption estimated from surveys, and discrepancies between imports recorded by Indonesia and exports recorded by its trade partners. Smuggling plays a minor role in the availability of illicit cigarettes because Indonesians predominantly consume kreteks, which are primarily manufactured in Indonesia. RESULTS: Over the period from 1995 to 2013, illicit cigarettes first emerged in 2004. When no respondent under-reporting is assumed, illicit consumption makes up 17% of the domestic market in 2004, 9% in 2007, 11% in 2011, and 8% in 2013. Discrepancies in the trade data indicate that Indonesia was a recipient of smuggled cigarettes in each year between 1995 and 2012. The value of this illicit trade ranges from less than $1 million to nearly $50 million annually. Singapore, China, and Vietnam together accounted for nearly two-thirds of trade discrepancies over the period. Tax losses due to illicit consumption amount to between Rp 4.1 trillion and Rp 9.3 trillion, or 4% to 13% of tobacco excise revenue, in 2011 and 2013. CONCLUSIONS: Given the predominance of kretek consumption in Indonesia and Indonesia's status as the predominant producer of kreteks, illicit domestic production is likely the most important source of illicit cigarettes, and initiatives targeted at combating this illicit production promise the greatest potential impact.
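The first of these gap methods amounts to simple arithmetic on aggregate quantities. The sketch below illustrates the calculation with made-up placeholder figures, not the Indonesian data reported in the study.

# Illustrative sketch of the consumption-gap method for estimating illicit
# cigarette consumption. All numbers are hypothetical placeholders.
survey_consumption = 310e9   # sticks per year, estimated from household surveys
legal_sales        = 280e9   # tax-paid sticks per year, from excise records
excise_per_stick   = 300.0   # hypothetical excise rate, rupiah per stick

illicit_consumption = max(survey_consumption - legal_sales, 0.0)
illicit_share = illicit_consumption / survey_consumption
tax_loss = illicit_consumption * excise_per_stick

print(f"Illicit share of domestic market: {illicit_share:.1%}")
print(f"Foregone excise revenue: Rp {tax_loss / 1e12:.1f} trillion")

The trade-discrepancy method is analogous, with partner-reported exports to Indonesia taking the place of survey-estimated consumption and Indonesian-recorded imports taking the place of legal sales.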

Relevance: 30.00%

Abstract:

BACKGROUND: Involuntary job loss is a major life event associated with social, economic, behavioural, and health outcomes, for which older workers are at elevated risk. OBJECTIVE: To assess the 10 year risk of myocardial infarction (MI) and stroke associated with involuntary job loss among workers over 50 years of age. METHODS: Using data from the nationally representative US Health and Retirement Survey (HRS), Cox proportional hazards models were fit to estimate whether workers who suffered involuntary job loss were at higher risk for subsequent MI and stroke than individuals who continued to work. The sample included 4301 individuals who were employed at the 1992 study baseline. RESULTS: Over the 10 year study frame, 582 individuals (13.5% of the sample) experienced involuntary job loss. After controlling for established predictors of the outcomes, displaced workers had a more than twofold increase in the risk of subsequent MI (hazard ratio (HR) = 2.48; 95% confidence interval (CI) = 1.49 to 4.14) and stroke (HR = 2.43; 95% CI = 1.18 to 4.98) relative to working persons. CONCLUSION: Results suggest that the true costs of late career unemployment exceed financial deprivation and include substantial health consequences. Physicians who treat individuals who lose jobs as they near retirement should consider the loss of employment a potential risk factor for adverse vascular health changes. Policy makers and programme planners should also be aware of the risks of job loss, so that programmatic interventions can be designed and implemented to ease the multiple burdens of joblessness.
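For readers who want to see the shape of such an analysis, a hedged sketch in Python using the lifelines library follows. The file, column names, and covariate list are hypothetical placeholders, and job loss is treated here as a fixed baseline exposure rather than the time-varying exposure a full reanalysis of the HRS would require.

# Sketch of a Cox proportional hazards analysis of time to MI, with involuntary
# job loss as a covariate. Variable names and the data file are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("hrs_analytic_sample.csv")   # hypothetical analytic file
# expected columns: years_to_mi (follow-up time), mi_event (1 = MI observed),
# job_loss (1 = involuntary job loss), plus baseline confounders
covariates = ["job_loss", "age", "smoker", "hypertension", "diabetes", "bmi"]

cph = CoxPHFitter()
cph.fit(df[["years_to_mi", "mi_event"] + covariates],
        duration_col="years_to_mi", event_col="mi_event")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% confidence intervals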

Relevance: 30.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis, on the grounds that "n = all," is of little relevance outside of certain narrow applications. In most fields, the main result of the Big Data revolution has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but they often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the emphasis is on joint inference outside of the standard setting of multivariate continuous data that has dominated previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
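For context, and using standard notation rather than the thesis's own, a PARAFAC-type latent structure model factorizes the joint probability mass function of p categorical variables y_1, ..., y_p as a finite mixture over k latent classes:

P(y_1 = c_1, \ldots, y_p = c_p) \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j}, \qquad \nu_h \ge 0, \quad \sum_{h=1}^{k} \nu_h = 1,

where \lambda^{(j)}_{h c_j} is the probability that variable j takes level c_j within latent class h. The smallest k admitting such a representation is the nonnegative rank of the probability tensor, which is the quantity Chapter 2 relates to the support of a log-linear model.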

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
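For orientation only, one standard way of making "optimal Gaussian approximation" precise, stated generically and under the assumption that optimality is measured by Kullback-Leibler divergence from the exact posterior as in the chapter's bounds, is

\hat{q} \;=\; \operatorname*{arg\,min}_{q \,=\, \mathcal{N}(\mu, \Sigma)} \; \mathrm{KL}\!\left( \pi_n \,\middle\|\, q \right),

whose minimizer sets \mu and \Sigma equal to the mean and covariance of the exact posterior \pi_n; finite-sample control of \mathrm{KL}(\pi_n \,\|\, \hat{q}) is then what makes the Gaussian a usable surrogate for posterior inference.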

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, yet comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
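As a concrete, deliberately simple example of an approximating transition kernel of the sort this framework covers, the sketch below replaces the exact log-likelihood in a random-walk Metropolis step with a rescaled random-subset estimate. This is a generic illustration, not the algorithms or error bounds developed in Chapter 6.

import numpy as np

def approx_mh(log_prior, log_lik_terms, theta0, n_iter=2000,
              subset_frac=0.1, step=0.1, seed=0):
    """Random-walk Metropolis whose acceptance ratio is computed from a random
    subset of the data, rescaled to the full sample size. The resulting chain
    targets a perturbed posterior rather than the exact one."""
    rng = np.random.default_rng(seed)
    n = len(log_lik_terms)          # one log-likelihood callable per observation
    m = max(1, int(subset_frac * n))
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))

    def approx_log_post(th):
        idx = rng.choice(n, size=m, replace=False)
        # unbiased estimate of the full log-likelihood sum (not of its exponential)
        return log_prior(th) + (n / m) * sum(log_lik_terms[i](th) for i in idx)

    samples = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        if np.log(rng.uniform()) < approx_log_post(proposal) - approx_log_post(theta):
            theta = proposal
        samples[t] = theta
    return samples

Because the subsampled acceptance ratio only approximates the exact one, the chain's stationary distribution is perturbed away from the true posterior; deciding how much of that perturbation to tolerate in exchange for cheaper iterations, given a loss function and a computational budget, is precisely the trade-off the chapter formalizes.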

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
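For reference, the truncated-normal scheme referred to above is the standard Albert and Chib data augmentation sampler for Bayesian probit regression. A minimal sketch with a flat prior on the coefficients (illustrative only, not the thesis code) is given below; it is this kernel whose mixing degrades when the observed number of successes is small relative to n.

import numpy as np
from scipy.stats import truncnorm

def probit_da_gibbs(X, y, n_iter=2000, seed=0):
    """Albert-Chib truncated-normal data augmentation Gibbs sampler for
    Bayesian probit regression with a flat prior on beta (illustrative sketch).
    With large n and very few successes, the draws of beta become highly
    autocorrelated, which is the slow-mixing regime described above."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)      # posterior covariance of beta given z
    chol = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # z_i | beta, y_i is N(mu_i, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0);
        # bounds below are standardized around mu
        lower = np.where(y == 1, -mu, -np.inf)
        upper = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # beta | z is N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior
        beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(p)
        draws[t] = beta
    return draws

Running this sketch on a design with an intercept and a few covariates, with only a handful of successes out of tens of thousands of observations, and then inspecting the autocorrelation or effective sample size of the draws, reproduces the qualitative slow-mixing behaviour described above.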

Relevance: 30.00%

Abstract:

Although people frequently pursue multiple goals simultaneously, these goals often conflict with each other. For instance, consumers may have both a healthy eating goal and a goal to have an enjoyable eating experience. In this dissertation, I focus on two sources of enjoyment in eating experiences that may conflict with healthy eating: consuming tasty food (Essay 1) and affiliating with indulging dining companions (Essay 2). In both essays, I examine solutions and strategies that decrease the conflict between healthy eating and these aspects of enjoyment in the eating experience, thereby enabling consumers to resolve such goal conflicts.

Essay 1 focuses on the well-established conflict between having healthy food and having tasty food and introduces a novel product offering (“vice-virtue bundles”) that can help consumers simultaneously address both health and taste goals. Through several experiments, I demonstrate that consumers often choose vice-virtue bundles with small proportions (¼) of vice and that they view such bundles as healthier than but equally tasty as bundles with larger vice proportions, indicating that “healthier” does not always have to equal “less tasty.”

Essay 2 focuses on a conflict between healthy eating and affiliation with indulging dining companions. The first set of experiments provides evidence of this conflict and examines why it arises (Studies 1 to 3). Based on this conflict's origins, the second set of experiments tests strategies that consumers can use to decrease the conflict between healthy eating and affiliation with an indulging dining companion (Studies 4 and 5), such that they can make healthy food choices while still being liked by an indulging dining companion. Thus, Essay 2 broadens the existing picture of goals that conflict with the healthy eating goal and, together with Essay 1, identifies solutions to such goal conflicts.

Relevance: 30.00%

Abstract:

Optimal perioperative fluid management is an important component of Enhanced Recovery After Surgery (ERAS) pathways. Fluid management within ERAS should be viewed as a continuum through the preoperative, intraoperative, and postoperative phases. Each phase is important for improving patient outcomes, and suboptimal care in one phase can undermine best practice within the rest of the ERAS pathway. The goal of preoperative fluid management is for the patient to arrive in the operating room in a hydrated and euvolemic state. To achieve this, prolonged fasting is not recommended, and routine mechanical bowel preparation should be avoided. Patients should be encouraged to ingest a clear carbohydrate drink two to three hours before surgery. The goals of intraoperative fluid management are to maintain central euvolemia and to avoid excess salt and water. To achieve this, patients undergoing surgery within an enhanced recovery protocol should have an individualized fluid management plan. As part of this plan, excess crystalloid should be avoided in all patients. For low-risk patients undergoing low-risk surgery, a "zero-balance" approach might be sufficient. In addition, for most patients undergoing major surgery, individualized goal-directed fluid therapy (GDFT) is recommended. Ultimately, however, the additional benefit of GDFT should be determined based on surgical and patient risk factors. Postoperatively, once oral fluid intake is established, intravenous fluid administration can be discontinued and restarted only if clinically indicated. In the absence of other concerns, detrimental postoperative fluid overload is not justified and "permissive oliguria" could be tolerated.

Relevance: 30.00%

Abstract:

BACKGROUND: Goal-directed fluid therapy (GDFT) is associated with improved outcomes after surgery. The esophageal Doppler monitor (EDM) is widely used, but has several limitations. The NICOM, a completely noninvasive cardiac output monitor (Cheetah Medical), may be appropriate for guiding GDFT. No prospective studies have compared the NICOM and the EDM. We hypothesized that the NICOM is not significantly different from the EDM for monitoring during GDFT. METHODS: One hundred adult patients undergoing elective colorectal surgery participated in this study. Patients in phase I (n = 50) had intraoperative GDFT guided by the EDM while the NICOM was connected, and patients in phase II (n = 50) had intraoperative GDFT guided by the NICOM while the EDM was connected. Each patient's stroke volume was optimized using 250-mL colloid boluses. Agreement between the monitors was assessed, and patient outcomes (postoperative pain, nausea, and return of bowel function), complications (renal, pulmonary, infectious, and wound complications), and length of hospital stay (LOS) were compared. RESULTS: Using a 10% increase in stroke volume after fluid challenge, agreement between monitors was 60% at 5 minutes, 61% at 10 minutes, and 66% at 15 minutes, with no significant systematic disagreement (McNemar P > 0.05) at any time point. The EDM had significantly more missing data than the NICOM. No clinically significant differences were found in total LOS or other outcomes. The mean LOS was 6.56 ± 4.32 days in phase I and 6.07 ± 2.85 days in phase II, and the 95% confidence limits for the difference were -0.96 to +1.95 days (P = 0.5016). CONCLUSIONS: The NICOM performs similarly to the EDM in guiding GDFT, with no clinically significant differences in outcomes, and offers increased ease of use as well as fewer missing data points. The NICOM may be a viable alternative monitor to guide GDFT.
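To make the agreement analysis concrete, the sketch below computes percent agreement on the "responder" definition (a 10% or greater increase in stroke volume after a fluid bolus) and McNemar's test for systematic disagreement. The responder calls are simulated placeholders, not the trial data.

# Illustrative agreement analysis: hypothetical responder calls (True = stroke
# volume rose >= 10% after a bolus) for the same boluses, as judged by the EDM
# and the NICOM.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(1)
edm_responder = rng.random(120) < 0.45            # hypothetical EDM calls
nicom_responder = np.where(rng.random(120) < 0.65,
                           edm_responder,          # agree about 65% of the time
                           ~edm_responder)         # otherwise disagree

percent_agreement = np.mean(edm_responder == nicom_responder)

# 2x2 table of (EDM responder?, NICOM responder?) counts
table = np.array([
    [np.sum(edm_responder & nicom_responder),  np.sum(edm_responder & ~nicom_responder)],
    [np.sum(~edm_responder & nicom_responder), np.sum(~edm_responder & ~nicom_responder)],
])
result = mcnemar(table, exact=True)   # tests for systematic disagreement

print(f"Agreement: {percent_agreement:.0%}, McNemar p = {result.pvalue:.3f}")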

Relevance: 30.00%

Abstract:

Regulatory focus theory (RFT) proposes two different social-cognitive motivational systems for goal pursuit: a promotion system, which is organized around strategic approach behaviors and "making good things happen," and a prevention system, which is organized around strategic avoidance and "keeping bad things from happening." The promotion and prevention systems have been extensively studied in behavioral paradigms, and RFT posits that prolonged perceived failure to make progress in pursuing promotion or prevention goals can lead to ineffective goal pursuit and chronic distress (Higgins, 1997).

Research has begun to focus on uncovering the neural correlates of the promotion and prevention systems in an attempt to differentiate them at the neurobiological level. Preliminary research suggests that the promotion and prevention systems have both distinct and overlapping neural correlates (Eddington, Dolcos, Cabeza, Krishnan, & Strauman, 2007; Strauman et al., 2013). However, little research has examined how individual differences in regulatory focus develop and manifest. The development of individual differences in regulatory focus is particularly salient during adolescence, a crucial topic to explore given the dramatic neurodevelopmental and psychosocial changes that take place during this time, especially with regard to self-regulatory abilities. A number of questions remain unexplored, including the potential for goal-related neural activation to be modulated by (a) perceived proximity to goal attainment, (b) individual differences in regulatory orientation, specifically general beliefs about one's success or failure in attaining the two kinds of goals, (c) age, with a particular focus on adolescence, and (d) homozygosity for the Met allele of the catechol-O-methyltransferase (COMT) Val158Met polymorphism, a naturally occurring genotype which has been shown to impact prefrontal cortex activation patterns associated with goal pursuit behaviors.

This study explored the neural correlates of the promotion and prevention systems using a priming paradigm involving rapid, brief, masked presentation of individually selected promotion and prevention goals to each participant during scanning. The goals used as priming stimuli varied with regard to whether participants reported that they were close to or far away from achieving them (i.e., a "match" versus a "mismatch" representing perceived success or failure in personal goal pursuit). The study also assessed participants' overall beliefs regarding their relative success or failure in attaining promotion and prevention goals, and all participants were genotyped for the COMT Val158Met polymorphism.

A number of significant findings emerged. Both promotion and prevention priming were associated with activation in regions associated with self-referential cognition, including the left medial prefrontal cortex, cuneus, and lingual gyrus. Promotion and prevention priming were also associated with distinct patterns of neural activation; specifically, left middle temporal gyrus activation was found to be significantly greater during prevention priming. Activation in response to promotion and prevention goals was found to be modulated by self-reports of both perceived proximity to goal achievement and goal orientation. Age also had a significant effect on activation, such that activation in response to goal priming became more robust in the prefrontal cortex and in default mode network regions as a function of increasing age. Finally, COMT genotype also modulated the neural response to goal priming both alone and through interactions with regulatory focus and age. Overall, these findings provide further clarification of the neural underpinnings of the promotion and prevention systems as well as provide information about the role of development and individual differences at the personality and genetic level on activity in these neural systems.