Abstract:
ROLE OF THE LOW-AFFINITY β1-ADRENERGIC RECEPTOR IN NORMAL AND DISEASED HEARTS Background: The β1-adrenergic receptor (AR) has at least two binding sites, β1HAR and β1LAR (the high- and low-affinity sites of the β1AR, respectively), both of which cause cardiostimulation. Some β-blockers, for example (-)-pindolol and (-)-CGP 12177, can activate β1LAR, albeit at concentrations higher than those required to block β1HAR. While β1HAR can be blocked by all clinically used β-blockers, β1LAR is relatively resistant to blockade. Chronic β1LAR activation may therefore occur in the setting of β-blocker therapy, mediating persistent βAR signalling, so it is important to determine the potential significance of β1LAR in vivo, particularly in disease settings. Methods and results: C57Bl/6 male mice were used. Chronic (4-week) β1LAR activation was achieved by treatment with (-)-CGP 12177 via osmotic minipump. Cardiac function was assessed by echocardiography and catheterization. (-)-CGP 12177 treatment in healthy mice increased heart rate and left ventricular (LV) contractility without detectable LV remodelling or hypertrophy. In mice subjected to an 8-week period of aortic banding, (-)-CGP 12177 treatment given during weeks 4–8 led to a positive inotropic effect. However, (-)-CGP 12177 treatment exacerbated LV remodelling, indicated by a worsening of LV hypertrophy by ??% (estimated by weight, wall thickness and cardiomyocyte size) and of interstitial/perivascular fibrosis (by histology). Importantly, (-)-CGP 12177 treatment of aortic-banded mice exacerbated cardiac expression of hypertrophic, fibrogenic and inflammatory genes (all p<0.05 vs. non-treated aortic-banded controls). Conclusion: β1LAR activation provides functional support to the heart in both normal and diseased (pressure-overload) settings. Sustained β1LAR activation in the diseased heart exacerbates LV remodelling and may therefore promote disease progression from compensatory hypertrophy to heart failure. Word count: 270
Abstract:
Background Malnutrition is common among dialysis patients and is associated with adverse outcomes. One cause is a persistent reduction in nutrient intake, suggesting an abnormality of appetite regulation. Methods We used a novel technique to describe the appetite profile in 46 haemodialysis (HD) patients and 40 healthy controls. The Electronic Appetite Rating System (EARS) employs a palmtop computer to collect hourly ratings of motivation to eat and mood; we collected data on hunger, desire to eat, fullness and tiredness. HD subjects were monitored on the dialysis day and the interdialytic day; controls were monitored for 1 or 2 days. Results Temporal profiles of motivation to eat were similar on both days for the controls but lower on the dialysis day for the HD group. Mean HD scores were not significantly different from those of controls. Dietary records indicated that dialysis patients consumed less food than controls. Conclusions Our data indicate that the EARS can be used to monitor subjective appetite states continuously in a group of HD patients. An HD session reduces hunger and desire to eat. Patients feel more tired after dialysis; tiredness does not correlate with their hunger scores but does correlate with their fullness ratings. Nutrient intake is reduced, suggesting a resetting of appetite control in the HD group. The EARS may be useful for intervention studies.
Abstract:
Objective: The evidence was reviewed on how physical activity could influence the regulation of food intake by either adjusting the sensitivity of appetite control mechanisms or by generating an energy deficit that could adjust the drive to eat. Design: Interventionist and correlational studies that had a significant influence on the relationship between physical activity and food intake were reviewed. Interventionist studies involve a deliberate imposition of physical activity with subsequent monitoring of the eating response. Correlational studies make use of naturally occurring differences in the levels of physical activity (between and within subjects) with simultaneous assessment of energy expenditure and intake. Subjects: Studies using lean, overweight, and obese men and women were included. Results: Only 19% of interventionist studies report an increase in energy intake after exercise; 65% show no change and 16% show a decrease in appetite. Of the correlational studies, approximately half show no relationship between energy expenditure and intake. These data indicate a rather loose coupling between energy expenditure and intake. A common sense view is that exercise is futile as a form of weight control because the energy deficit drives a compensatory increase in food intake. However, evidence shows that this is not generally true. One positive aspect of this is that raising energy expenditure through physical activity (or maintaining an active life style) can cause weight loss or prevent weight gain. A negative feature is that when people become sedentary after a period of high activity, food intake is not “down-regulated” to balance a reduced energy expenditure. Conclusion: Evidence suggests that a high level of physical activity can aid weight control either by improving the matching of food intake to energy expenditure (regulation) or by raising expenditure so that it is difficult for people to eat themselves into a positive energy balance.
Abstract:
Aims To identify self-care activities undertaken and to determine relationships between self-efficacy, depression, quality of life, social support and adherence to compression therapy in a sample of patients with chronic venous insufficiency. Background Up to 70% of venous leg ulcers recur after healing. Compression hosiery is a primary strategy to prevent recurrence; however, problems with adherence to this strategy are well documented, and an improved understanding of how psychosocial factors influence patients with chronic venous insufficiency will help guide effective preventive strategies. Design Cross-sectional survey and retrospective medical record review. Method All patients previously diagnosed with a venous leg ulcer that had healed 12–36 months prior to the study were invited to participate. Data on health, psychosocial variables and self-care activities were obtained from a self-report survey, and data on medical and previous ulcer history were obtained from medical records. Multiple linear regression modelling was used to determine the independent influences of psychosocial factors on adherence to compression therapy. Results In a sample of 122 participants, the most frequently identified self-care activities were application of topical skin treatments, wearing compression hosiery and covering the legs to prevent trauma. Compression hosiery was worn for a median of 4 days/week (range 0–7). After adjustment for all variables and potential confounders in a multivariable regression model, wearing compression hosiery was significantly positively associated with participants' knowledge of the cause of their condition (p=0.002), higher self-efficacy scores (p=0.026) and lower depression scores (p=0.009). Conclusion In this sample, depression, self-efficacy and knowledge were significantly related to adherence to compression therapy.
Relevance to clinical practice These findings support the need to screen for and treat depression in this population. In addition, strategies to improve patient knowledge and self-efficacy may positively influence adherence to compression therapy.
Abstract:
Exercise is known to cause physiological changes that could affect the impact of nutrients on appetite control. This study was designed to assess the effect of drinks containing either sucrose or high-intensity sweeteners on food intake following exercise. Using a repeated-measures design, three drink conditions were employed: plain water (W), a low-energy drink sweetened with the artificial sweeteners aspartame and acesulfame-K (L), and a high-energy, sucrose-sweetened drink (H). Following a period of challenging exercise (70% VO2 max for 50 min), subjects drank freely from one of the drinks before being offered a test meal at which energy and nutrient intakes were measured. The perceived pleasantness (palatability) of the drinks was also measured before and after exercise. At the test meal, energy intake following the artificially sweetened (L) drink was significantly greater than after the water (W) and sucrose (H) drinks (p < 0.05). Compared with the artificially sweetened (L) drink, the high-energy (H) drink suppressed intake by approximately the energy contained in the drink itself; however, there was no difference between the water (W) and sucrose (H) drinks in test-meal energy intake. When the net effects were compared (i.e., drink + test-meal energy intake), total energy intake was significantly lower after the water (W) drink than after the two sweet (L and H) drinks. The exercise period brought about changes in the perceived pleasantness of the water, but had no effect on either of the sweet drinks. The remarkably precise energy compensation demonstrated after the higher-energy sucrose drink suggests that exercise may prime the system to respond sensitively to nutritional manipulations. The results may also have implications for the effect on short-term appetite control of different types of drinks used to quench thirst during and after exercise.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature in some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers and calculation of moving averages, as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population; subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution; this could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to 9.8–10.16, with the time-segmented summary data (dataset F) MR being 9.8 and the raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series variables alone, model complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of a single leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09.
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, the nearest-neighbour method (NNge) and the support-vector-machine-based method, SMO, have the highest MR at 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules that are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables over risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are both used as model input.
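The SAX transform mentioned above reduces a numeric time series to a short symbolic word: z-normalise, compress with Piecewise Aggregate Approximation (PAA), then map each segment mean to a letter via equiprobable Gaussian breakpoints. A minimal sketch follows; the segment count, four-letter alphabet and the heart-rate-like example trace are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np

# Breakpoints for a 4-symbol alphabet: they cut the standard normal
# distribution into four equiprobable regions, as in classic SAX.
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax(series, n_segments=8, alphabet="abcd"):
    """Convert a numeric time series into a SAX word."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)        # z-normalisation
    segments = np.array_split(x, n_segments)      # PAA segments
    means = np.array([seg.mean() for seg in segments])
    symbols = np.searchsorted(BREAKPOINTS, means) # map means to regions
    return "".join(alphabet[i] for i in symbols)

# Hypothetical rising-then-falling heart-rate-like trace
trace = np.concatenate([np.linspace(60, 100, 50), np.linspace(100, 70, 50)])
word = sax(trace)
print(word)  # an 8-letter word over {a, b, c, d}
```

Words produced this way can then be clustered, and cluster membership used as a derived input variable, which is the role SAX plays in dataset RF_G above.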
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
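The evaluation approach described above — under-sampling the majority class to balance the data, then comparing a decision tree and logistic regression by misclassification rate, Kappa statistic and AUC — can be sketched as follows. This is a minimal illustration: synthetic data stands in for the anaesthesia-derived features, and scikit-learn is an assumed tool, not the software actually used in the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Synthetic stand-in for the anaesthesia dataset: unbalanced classes
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

# Majority-class under-sampling: keep every minority case and draw an
# equal number of majority cases at random
rng = np.random.default_rng(0)
minority = np.where(y == 1)[0]
majority = rng.choice(np.where(y == 0)[0], size=len(minority), replace=False)
idx = np.concatenate([minority, majority])
X_bal, y_bal = X[idx], y[idx]

X_tr, X_te, y_tr, y_te = train_test_split(
    X_bal, y_bal, test_size=0.3, stratify=y_bal, random_state=0)

for name, model in [("DT", DecisionTreeClassifier(random_state=0)),
                    ("LR", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    mr = 100 * (pred != y_te).mean()  # misclassification rate, %
    kappa = cohen_kappa_score(y_te, pred)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: MR={mr:.1f}%  kappa={kappa:.2f}  AUC={auc:.2f}")
```

Reporting Kappa and AUC alongside MR is what guards against the class-distribution sensitivity of misclassification rate noted in the abstract.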