Abstract:
The efficacy of exercise to promote weight loss could potentially be undermined by its influence on explicit or implicit processes of liking and wanting for food, which in turn alter food preference. The present study was designed to examine hedonic and homeostatic mechanisms involved in the acute effects of exercise on food intake. Twenty-four healthy female subjects were recruited to take part in two counterbalanced activity sessions: 50 min of high-intensity (70% max heart rate) exercise (Ex) or no exercise (NEx). Subjective appetite sensations, explicit and implicit hedonic processes, food preference and energy intake (EI) were measured immediately before and after each activity session and an ad libitum test meal. Two groups of subjects were identified in which exercise exerted different effects on compensatory EI and food preference. After exercise, compensators (C) increased their EI, rated the food to be more palatable, and demonstrated increased implicit wanting. Compensators also showed a preference for high-fat sweet food compared with non-compensators (NC), independent of the exercise intervention. Exercise-induced changes in the hedonic response to food could be an important consideration in the efficacy of using exercise as a means to lose weight. An enhanced implicit wanting for food after exercise may help to explain why some people overcompensate during acute eating episodes. Some individuals could be resistant to the beneficial effects of exercise due to a predisposition to compensate for exercise-induced energy expenditure as a result of implicit changes in food preferences.
Abstract:
This paper reports on a comparative study of students and non-students that investigates which psycho-social factors influence intended donation behaviour within a single organisation that offers multiple forms of donation activity. Additionally, the study examines which media channels are more important to encourage donation. A self-administered survey instrument was used and a sample of 776 respondents recruited. Logistic regressions and a Chow test were used to determine statistically significant differences between the groups. For donating money, importance of charity and attitude towards charity influence students, whereas only importance of need significantly influences non-students. For donating time, no significant influences were found for non-students; however, importance of charity and attitude towards charity were significant for students. Importance of need was significant for both students and non-students for donating goods, with importance of charity also significant for students. Telephone and television channels were important for both groups. However, Internet, email and short messaging services were more important for students, providing opportunities to enhance this group’s perceptions of the importance of the charity, and the importance of the need, which ultimately impacts on their attitudes towards the charity. These differences highlight the importance of charities focussing on those motivations and attitudes that are important to a particular target segment and communicating through appropriate media channels for these segments.
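The group comparison above combines standard logistic regressions with a Chow-style test of coefficient equality. As an illustration only, the following minimal Python sketch (using statsmodels) implements a likelihood-ratio analogue of the Chow test for logistic regression: a pooled model is compared against separate models fitted to students and non-students. The variable names are invented placeholders, not the survey items used in the paper.

```python
# Hedged sketch: a Chow-type (likelihood-ratio) test of whether logistic-regression
# coefficients differ between two groups (e.g. students vs non-students).
# Column names such as "intend_donate" or "is_student" are illustrative only.
import statsmodels.api as sm
from scipy import stats

def chow_type_lr_test(df, outcome, predictors, group_col):
    """Compare a pooled logistic model against separate models per group."""
    X = sm.add_constant(df[predictors])
    pooled = sm.Logit(df[outcome], X).fit(disp=0)

    llf_groups, n_params = 0.0, 0
    for _, sub in df.groupby(group_col):
        Xg = sm.add_constant(sub[predictors])
        fit = sm.Logit(sub[outcome], Xg).fit(disp=0)
        llf_groups += fit.llf
        n_params += len(fit.params)

    # LR statistic: twice the gain in log-likelihood from letting all
    # coefficients (including the intercept) differ between groups.
    lr = 2 * (llf_groups - pooled.llf)
    dof = n_params - len(pooled.params)
    p_value = stats.chi2.sf(lr, dof)
    return lr, dof, p_value

# Example call (hypothetical data frame):
# chow_type_lr_test(df, "intend_donate",
#                   ["importance_charity", "attitude_charity", "importance_need"],
#                   "is_student")
```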
Abstract:
In this age of evidence-based practice, nurses are increasingly expected to use research evidence in a systematic and judicious way when making decisions about patient care practices. Clinicians recognise the role of research when it provides valid, realistic answers in practical situations. Nonetheless, research is still perceived by some nurses as external to practice, and implementing research findings into practice is often difficult. Since its conceptual platform in the 1960s, the emergence and growth of Nursing Development Units, and later Practice Development Units, have been described in the literature as strategic, organisational vehicles for changing the way nurses think about nursing by promoting and supporting a culture of inquiry and research-based practice. Thus, some scholars argue that practice development is situated in the gap between research and practice. Since the 1990s, the discourse has shifted from the structure and outcomes of developing practice to the process of developing practice, using a Practice Development methodology, underpinned by critical social science theory, as a vehicle for changing the culture and context of care. The nursing and practice development literature is dominated by descriptive reports of local practice development activity, typically focusing on reflection on processes or outcomes of processes, and describing perceived benefits. However, despite the volume of published literature, there is little published empirical research in the Australian or international context on the effectiveness of Practice Development as a methodology for changing the culture and context of care, leaving a gap in the literature. The aim of this study was to develop, implement and evaluate the effectiveness of a Practice Development model for clinical practice review and change on changing the culture and context of care for nurses working in an acute care setting. A longitudinal, pre-test/post-test, non-equivalent control group design was used to answer the following research questions: 1. Is there a relationship between nurses' perceptions of the culture and context of care and nurses' perceptions of research and evidence-based practice? 2. Is there a relationship between engagement in a facilitated process of Practice Development and change in nurses' perceptions of the culture and context of care? 3. Is there a relationship between engagement in a facilitated process of Practice Development and change in nurses' perceptions of research and evidence-based practice? Through a critical analysis of the literature and synthesis of the findings of past evaluations of Nursing and Practice Development structures and processes, this research has identified key attributes, consistent throughout the chronological and theoretical development of Nursing and Practice Development, that exemplify a culture and context of care conducive to creating a culture of inquiry and evidence-based practice. The study findings were then used in the development, validation and testing of an instrument to measure change in the culture and context of care. Furthermore, this research has also provided empirical evidence of the relationship of the key attributes to each other and to barriers to research and evidence-based practice. The research also provides empirical evidence regarding the effectiveness of a Practice Development methodology in changing the culture and context of care.
This research is noteworthy in its contribution to advancing the discipline of nursing by providing evidence of the degree to which attributes of the culture and context of care, namely autonomy and control, workplace empowerment and constructive team dynamics, can be connected to engagement with research and evidence-based practice.
Abstract:
This naturalistic study investigated the mechanisms of change in measures of negative thinking and in 24-h urinary metabolites of noradrenaline (norepinephrine), dopamine and serotonin in a sample of 43 depressed hospital patients attending an eight-session group cognitive behavior therapy program. Most participants (91%) were taking antidepressant medication throughout the therapy period according to their treating psychiatrists' prescriptions. The sample was divided into outcome categories (19 Responders and 24 Non-responders) on the basis of a clinically reliable change index [Jacobson, N.S., & Truax, P., 1991. Clinical significance: a statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19.] applied to the Beck Depression Inventory scores at the end of the therapy. Results of repeated measures analyses of variance (ANOVA) indicated that all measures of negative thinking improved significantly during therapy, and significantly more so in the Responders, as expected. The treatment had a significant impact on urinary adrenaline and metadrenaline excretion; however, these changes occurred in both Responders and Non-responders. Acute treatment did not significantly influence the six other monoamine metabolites. In summary, changes in urinary monoamine levels during combined treatment for depression were not associated with self-reported changes in mood symptoms.
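Responder status above was defined with the Jacobson and Truax (1991) reliable change index applied to Beck Depression Inventory (BDI) scores. As a minimal sketch of how that index is computed, the Python snippet below uses placeholder values for the pre-treatment standard deviation and test-retest reliability; they are not the figures from this study.

```python
# Hedged sketch of the Jacobson & Truax (1991) reliable change index (RCI).
# sd_pre and reliability below are illustrative placeholders, not study values.
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    se_measurement = sd_pre * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2 * se_measurement ** 2)            # SE of the difference score
    return (post - pre) / s_diff

rci = reliable_change_index(pre=32, post=14, sd_pre=9.0, reliability=0.90)
reliable_improvement = rci < -1.96   # BDI scores fall when depression improves
print(round(rci, 2), reliable_improvement)
```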
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to give slightly increased accuracy but markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection results in a reduction in MR to 9.8–10.16, with time-segmented summary data (dataset F) having an MR of 9.8 and raw time-series summary data (dataset A) an MR of 9.92. However, for all time-series-only datasets, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine-based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by addition of risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological variable values being outside the accepted normal range, is associated with some improvement in model performance.
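The evaluation strategy described above (majority-class under-sampling with misclassification rate, the Kappa statistic and AUC, compared across a decision tree and logistic regression) can be illustrated with a short cross-validation sketch. The study used Weka; the Python/scikit-learn version below is only an approximation, with DecisionTreeClassifier standing in for J48 and a mutual-information filter standing in for Cfs subset evaluation, and it assumes `X` and `y` are NumPy arrays of features and binary CVD labels.

```python
# Hedged sketch of imbalanced-class evaluation: under-sample the majority class,
# apply a filter-type feature selector, then report misclassification rate (%),
# Cohen's kappa and ROC AUC from stratified cross-validation.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import cohen_kappa_score, roc_auc_score

def undersample_majority(X, y, rng):
    """Randomly drop majority-class rows until both classes are the same size."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    keep = np.concatenate([minority, rng.choice(majority, len(minority), replace=False)])
    return X[keep], y[keep]

def evaluate(X, y, model, n_folds=10, seed=0):
    rng = np.random.default_rng(seed)
    mrs, kappas, aucs = [], [], []
    for train, test in StratifiedKFold(n_folds, shuffle=True, random_state=seed).split(X, y):
        Xb, yb = undersample_majority(X[train], y[train], rng)
        selector = SelectKBest(mutual_info_classif, k=min(10, X.shape[1])).fit(Xb, yb)
        model.fit(selector.transform(Xb), yb)
        Xt = selector.transform(X[test])
        pred = model.predict(Xt)
        mrs.append(100 * np.mean(pred != y[test]))        # misclassification rate, %
        kappas.append(cohen_kappa_score(y[test], pred))
        aucs.append(roc_auc_score(y[test], model.predict_proba(Xt)[:, 1]))
    return np.mean(mrs), np.mean(kappas), np.mean(aucs)

# evaluate(X, y, DecisionTreeClassifier(max_depth=5))     # rough stand-in for J48
# evaluate(X, y, LogisticRegression(max_iter=1000))
```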
Abstract:
Summary: There are four interactions to consider between energy intake (EI) and energy expenditure (EE) in the development and treatment of obesity. (1) Does sedentariness alter levels of EI or subsequent EE? (2) Do high levels of EI alter physical activity or exercise? (3) Do exercise-induced increases in EE drive EI upwards and undermine dietary approaches to weight management? And (4) do low levels of EI elevate or decrease EE? There is little evidence that sedentariness alters levels of EI. This lack of cross-talk between altered EE and EI appears to promote a positive energy balance (EB). Lifestyle studies also suggest that a sedentary routine actually offers the opportunity for over-consumption. Substantive changes in non-exercise activity thermogenesis are feasible, but not clearly demonstrated. Cross-talk between elevated EE and EI is initially too weak, and takes too long to activate, to seriously threaten dietary approaches to weight management. It appears that substantial fat loss is possible before intake begins to track a sustained elevation of EE. There is more evidence that low levels of EI do lower physical activity levels, in relatively lean men under conditions of acute or prolonged semi-starvation and in dieting obese subjects. During altered EB there are a number of small but significant changes in the components of EE, including (i) sleeping and basal metabolic rate, (ii) the energy cost of weight change as weight is gained or lost, (iii) exercise efficiency, (iv) the energy cost of weight-bearing activities, and (v) during substantive overfeeding, the influence of diet composition (fat versus carbohydrate) on the energy cost of nutrient storage (~15%). The responses (i–v) above are all "obligatory" responses. Altered EB can also stimulate facultative behavioural responses, as a consequence of cross-talk between EI and EE. Altered EB will lead to changes in the mode, duration and intensity of physical activities. Feeding behaviour can also change. The degree of inter-individual variability in these responses will define the scope within which various mechanisms of EB compensation can operate. The relative importance of "obligatory" versus facultative, behavioural responses, as components of EB control, needs to be defined.
Abstract:
The burden of rising health care expenditures has created a demand for information regarding the clinical and economic outcomes associated with complementary and alternative medicines. Meta-analyses of randomized controlled trials have found Hypericum perforatum preparations to be superior to placebo and similarly effective to standard antidepressants in the acute treatment of mild to moderate depression. A clear advantage over antidepressants has been demonstrated in terms of reduced frequency of adverse effects, lower treatment withdrawal rates and good compliance, key variables affecting the cost-effectiveness of a given form of therapy. The most important risk associated with use is potential interactions with other drugs, but this may be mitigated by using extracts with low hyperforin content. As the indirect costs of depression are more than five times greater than direct treatment costs, and given the rising cost of pharmaceutical antidepressants, the comparatively low cost of Hypericum perforatum extract makes it worthy of consideration in the economic evaluation of treatments for mild to moderate depression.
Abstract:
Heart damage caused by acute myocardial infarction (AMI) is a leading cause of death and disability in Australia. Novel therapies are still required for the treatment of this condition due to the poor reparative ability of the heart. As such, cellular therapies that assist in the recovery of heart muscle are of great current interest. Culture-expanded mesenchymal stem cells (MSC) represent a stem and progenitor cell population that has been shown to promote tissue recovery in pre-clinical studies of AMI. For MSC-based therapies in the clinic, an intravenous route of administration would ideally be used due to the low cost, ease of delivery and relative safety. The study of MSC migration is therefore clinically relevant for a minimally invasive cell therapy to promote regeneration of damaged tissue. C57BL/6, UBI-GFP-BL/6 and CD44-/-/GFP+/+ mice were utilised to investigate mMSC migration. To assist in murine models of MSC migration, a novel method was used for the isolation of murine MSC (mMSC). These mMSC were then expanded in culture, and putative mMSC were positive for Sca-1, CD90.2 and CD44 and negative for CD45 and CD11b. Furthermore, mMSC from C57BL/6 and UBI-GFP-BL/6 mice were shown to differentiate into cells of the mesodermal lineage. Cells from CD44-/-/GFP+/+ mice were positive for Sca-1 and CD90.2, and negative for CD44, CD45 and CD11b; however, these cells were unable to differentiate into adipocytes and chondrocytes or express the lineage-specific genes PLIN and ACAN. Analysis of mMSC chemokine receptor (CR) expression showed that although mMSC do express chemokine receptors (including those specific for chemokines released after AMI), expression was low or undetectable at the mRNA level. However, protein expression could be detected, which was predominantly cytoplasmic. It was further shown that in both healthy (unperturbed) and inflamed tissues, mMSC had very little specific migration and engraftment after intravenous injection. To determine if poor mMSC migration was due to the inability of mMSC to respond to chemotactic stimuli, chemokine expression in bone marrow, skin injury and hearts (healthy and after AMI) was analysed at various time points by quantitative real-time PCR (qRT-PCR). Many chemokines were up-regulated after skin biopsy and AMI, but the highest acute levels were found for CXCL12 and CCL7. Due to their high expression in infarcted hearts, the chemokines CXCL12 and CCL7 were tested for their effect on mMSC migration. Despite CR expression at both protein and mRNA levels, migration in response to CXCL12 and CCL7 was low in mMSC cultured on Nunclon plastic. A novel tissue culture plastic technology (UpCell™) was then used that allowed gentle non-enzymatic dissociation of mMSC, thus preserving surface expression of the CRs. Despite this, the in vitro data indicated that CXCL12 fails to induce significant migration of mMSC, while CCL7 induces significant, but low-level, migration. We speculated this may be because of low levels of surface expression of chemokine receptors. In a strategy to increase cell surface expression of mMSC chemokine receptors and enhance their in vitro and in vivo migration capacity, mMSC were pre-treated with pro-inflammatory cytokines. Increased levels of both mRNA and surface protein expression were found for CRs by pre-treating mMSC with pro-inflammatory cytokines including TNF-α, IFN-γ, IL-1α and IL-6. Furthermore, the chemotactic response of mMSC to CXCL12 and CCL7 was significantly higher with these pre-treated cells.
Finally, the effectiveness of this type of cell manipulation was demonstrated in vivo, where mMSC pre-treated with TNF-α and IFN-γ showed significantly increased migration in skin injury and AMI models. Therefore this thesis has demonstrated, using in vitro and in vivo models, the potential for prior manipulation of MSC as a possible means of increasing the utility of intravenous delivery for MSC-based cellular therapies.
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for the lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no "gold standard" test is currently available to assess the tear film integrity. Therefore, improving techniques for the assessment of the tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study the tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or a bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced at the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to the light scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern's local orientation as a metric of TFSQ. These metrics helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, the HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation.
The LSI, meanwhile, appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural blinking conditions and suppressed blinking conditions, and was closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, which was identified during the former clinical study, was a lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was proposed to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series have been reported in this thesis. Over the years, different functions have been used to model the time series as well as to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques to model the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
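The extra metric just described, in which the analysis area is remapped from Cartesian to polar coordinates so the concentric rings become quasi-straight lines before a block statistic is pooled, can be sketched in a few lines of Python. The thesis does not specify the block statistic in this abstract, so per-block intensity standard deviation is used below purely as an illustrative stand-in, and the image, centre coordinates and block size are assumed inputs.

```python
# Hedged sketch of a polar-transform / block-processing quality metric for a
# Placido-ring image. The specific statistic used in the thesis is not given here;
# per-block intensity SD stands in for it.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center, n_radii=256, n_angles=360):
    """Resample a grayscale image onto an (angle, radius) grid around `center`."""
    cy, cx = center   # assumed to lie inside the ring pattern
    max_r = min(cy, cx, image.shape[0] - cy, image.shape[1] - cx)
    radii = np.linspace(0, max_r, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles)                  # shape (n_angles, n_radii)
    rows, cols = cy + rr * np.sin(aa), cx + rr * np.cos(aa)
    return map_coordinates(image.astype(float), [rows, cols], order=1)

def block_quality_metric(polar_img, block=(20, 20)):
    """Average per-block intensity SD; lower values suggest a smoother ring pattern."""
    h, w = polar_img.shape
    h, w = h - h % block[0], w - w % block[1]            # trim to whole blocks
    blocks = polar_img[:h, :w].reshape(h // block[0], block[0], w // block[1], block[1])
    return blocks.std(axis=(1, 3)).mean()

# Example: metric = block_quality_metric(to_polar(frame, center=(240, 320)))
```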
Abstract:
Physical inactivity is a leading factor associated with cardiovascular disease and a major contributor to the global burden of disease in developed countries. Subjective mood states associated with acute exercise are likely to influence future exercise adherence and warrant further investigation. The present study examined the effects of a single bout of vigorous exercise on mood and anxiety between individuals with substantially different exercise participation histories. Mood and anxiety were assessed one day before an exercise test (baseline), 5 minutes before (pre-test) and again 10 and 25 minutes post-exercise. Participants were 31 university students (16 males, 15 females; Age M = 20), with 16 participants reporting a history of regular exercise with the remaining 15 reporting to not exercise regularly. Each participant completed an incremental exercise test on a Monark cycle ergometer to volitional exhaustion. Regular exercisers reported significant post-exercise improvements in mood and reductions in state anxiety. By contrast, non-regular exercisers reported an initial decline in post-exercise mood and increased anxiety, followed by an improvement in mood and reduction in anxiety back to pre-exercise levels. Our findings suggest that previous exercise participation mediates affective responses to acute bouts of vigorous exercise. We suggest that to maximise positive mood changes following exercise, practitioners should carefully consider the individual’s exercise participation history before prescribing new regimes.
Abstract:
Older adults, especially those acutely ill, are vulnerable to developing malnutrition due to a range of risk factors. The high prevalence and extensive consequences of malnutrition in hospitalised older adults have been widely reported. However, there are few well-designed longitudinal studies that report the independent relationship between malnutrition and clinical outcomes after adjustment for a wide range of covariates. Acutely ill older adults are exceptionally prone to nutritional decline during hospitalisation, but few reports have studied this change and its impact on clinical outcomes. In the rapidly ageing Singapore population, all this evidence is lacking, and the characteristics associated with the risk of malnutrition are also not well documented. Despite the evidence on malnutrition prevalence, it is often under-recognised and under-treated. It is therefore crucial that validated nutrition screening and assessment tools are used for early identification of malnutrition. Although many nutrition screening and assessment tools are available, there is no universally accepted method for defining malnutrition risk and nutritional status. Most existing tools have been validated amongst Caucasians using various approaches, but they are rarely reported in the Asian elderly and none has been validated in Singapore. Due to differences in ethnicity, culture and language among Singaporean older adults, the results from non-Asian validation studies may not be applicable. Therefore it is important to identify validated population- and setting-specific nutrition screening and assessment methods to accurately detect and diagnose malnutrition in Singapore. The aims of this study are therefore to: i) characterise hospitalised elderly in a Singapore acute hospital; ii) describe the extent and impact of admission malnutrition; iii) identify and evaluate suitable methods for nutritional screening and assessment; and iv) examine changes in nutritional status during admission and their impact on clinical outcomes. A total of 281 participants, with a mean (±SD) age of 81.3 (±7.6) years, were recruited from three geriatric wards in Tan Tock Seng Hospital over a period of eight months. They were predominantly Chinese (83%) and community-dwellers (97%). They were screened within 72 hours of admission by a single dietetic technician using four nutrition screening tools [Tan Tock Seng Hospital Nutrition Screening Tool (TTSH NST), Nutritional Risk Screening 2002 (NRS 2002), Mini Nutritional Assessment-Short Form (MNA-SF), and Short Nutritional Assessment Questionnaire (SNAQ©)] that were administered in no particular order. The total scores were not computed during the screening process so that the dietetic technician was blinded to the results of all the tools. Nutritional status was assessed by a single dietitian, who was blinded to the screening results, using four malnutrition assessment methods [Subjective Global Assessment (SGA), Mini Nutritional Assessment (MNA), body mass index (BMI), and corrected arm muscle area (CAMA)]. The SGA rating was completed prior to computation of the total MNA score to minimise bias. Participants were reassessed for weight, arm anthropometry (mid-arm circumference, triceps skinfold thickness), and SGA rating at discharge from the ward.
The nutritional assessment tools and indices were validated against clinical outcomes (length of stay (LOS) >11 days, discharge to higher level care, 3-month readmission, 6-month mortality, and 6-month Modified Barthel Index) using multivariate logistic regression. The covariates included age, gender, race, dementia (defined using DSM-IV criteria), depression (defined using a single question, "Do you often feel sad or depressed?"), severity of illness (defined using a modified version of the Severity of Illness Index), comorbidities (defined using the Charlson Comorbidity Index), number of prescribed drugs, and admission functional status (measured using the Modified Barthel Index; MBI). The nutrition screening tools were validated against the SGA, which was found to be the most appropriate nutritional assessment tool from this study (refer to section 5.6). Prevalence of malnutrition on admission was 35% (defined by SGA), and it was significantly associated with characteristics such as swallowing impairment (malnourished vs well-nourished: 20% vs 5%), poor appetite (77% vs 24%), dementia (44% vs 28%), depression (34% vs 22%), and poor functional status (MBI 48.3±29.8 vs 65.1±25.4). The SGA had the highest completion rate (100%) and was predictive of the highest number of clinical outcomes: LOS >11 days (OR 2.11, 95% CI [1.17-3.83]), 3-month readmission (OR 1.90, 95% CI [1.05-3.42]) and 6-month mortality (OR 3.04, 95% CI [1.28-7.18]), independent of a comprehensive range of covariates including functional status, disease severity and cognitive function. SGA is therefore the most appropriate nutritional assessment tool for defining malnutrition. The TTSH NST was identified as the most suitable nutrition screening tool, with the best diagnostic performance against the SGA (AUC 0.865, sensitivity 84%, specificity 79%). Overall, 44% of participants experienced weight loss during hospitalisation, and 27% had weight loss >1% per week over a median LOS of 9 days (range 2-50). Well-nourished (45%) and malnourished (43%) participants were equally prone to experiencing decline in nutritional status (defined by weight loss >1% per week). Those with reduced nutritional status were more likely to be discharged to higher level care (adjusted OR 2.46, 95% CI [1.27-4.70]). This study is the first to characterise malnourished hospitalised older adults in Singapore. It is also one of the very few studies to (a) evaluate the association of admission malnutrition with clinical outcomes in a multivariate model; (b) determine the change in their nutritional status during admission; and (c) evaluate the validity of nutrition screening and assessment tools amongst hospitalised older adults in an Asian population. Results clearly highlight that admission malnutrition and deterioration in nutritional status are prevalent and are associated with adverse clinical outcomes in hospitalised older adults. With older adults being vulnerable to the risks and consequences of malnutrition, it is important that they are systematically screened so that timely and appropriate intervention can be provided. The findings highlighted in this thesis provide an evidence base for, and confirm the validity of, the current nutrition screening and assessment tools used among hospitalised older adults in Singapore.
As the older adults may have developed malnutrition prior to hospital admission, or experienced clinically significant weight loss of >1% per week of hospitalisation, screening of the elderly should be initiated in the community and continuous nutritional monitoring should extend beyond hospitalisation.
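The validation approach above, diagnostic performance of a screening tool against the SGA reference plus covariate-adjusted odds ratios for clinical outcomes, can be sketched briefly. The Python snippet below is illustrative only: column names, the screening cut-off and covariate list are placeholders rather than the TTSH NST specification or the study's actual model.

```python
# Hedged sketch: sensitivity/specificity/AUC of a screening score against SGA,
# and an adjusted odds ratio from multivariate logistic regression.
# All names and the cut-off are assumptions for illustration.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def diagnostic_performance(score, malnourished_sga, cutoff):
    flagged = np.asarray(score) >= cutoff
    malnourished_sga = np.asarray(malnourished_sga, dtype=bool)
    tp = np.sum(flagged & malnourished_sga)
    fn = np.sum(~flagged & malnourished_sga)
    tn = np.sum(~flagged & ~malnourished_sga)
    fp = np.sum(flagged & ~malnourished_sga)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "auc": roc_auc_score(malnourished_sga, score)}

def adjusted_odds_ratio(df, outcome, exposure, covariates):
    """Odds ratio (and 95% CI) for `exposure`, adjusted for `covariates`."""
    X = sm.add_constant(df[[exposure] + covariates])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    ci = np.exp(fit.conf_int().loc[exposure])
    return np.exp(fit.params[exposure]), tuple(ci)

# Example (hypothetical columns):
# adjusted_odds_ratio(df, "los_gt_11d", "malnourished_sga",
#                     ["age", "gender", "mbi", "severity", "charlson"])
```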
Abstract:
Background: There has been increasing interest in assessing the impacts of temperature on mortality. However, few studies have used a case-crossover design to examine non-linear and distributed lag effects of temperature on mortality. Additionally, little evidence is available on the temperature-mortality relationship in China, or on which temperature measure is the best predictor of mortality. Objectives: To use a distributed lag non-linear model (DLNM) as part of a case-crossover design; to examine the non-linear and distributed lag effects of temperature on mortality in Tianjin, China; and to explore which temperature measure is the best predictor of mortality. Methods: The DLNM was applied within a case-crossover design to assess the non-linear and delayed effects of temperatures (maximum, mean and minimum) on deaths (non-accidental, cardiopulmonary, cardiovascular and respiratory). Results: A U-shaped relationship was consistently found between temperature and mortality. Cold effects (significantly increased mortality associated with low temperatures) were delayed by 3 days and persisted for 10 days. Hot effects (significantly increased mortality associated with high temperatures) were acute and lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. Conclusions: In Tianjin, extreme cold and hot temperatures increased the risk of mortality. Results suggest that the effects of cold last longer than the effects of heat. It is possible to combine the case-crossover design with DLNMs. This allows the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature (or air pollution) whilst controlling for season.
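As a rough illustration of how a DLNM can be embedded in a time-stratified case-crossover analysis, the Python sketch below builds a cross-basis from lagged daily mean temperature using natural-spline bases in both the temperature and lag dimensions. The lag length, spline dimensions and data layout are assumptions for illustration, not the specification used in the Tianjin analysis; published DLNM work of this kind is usually done with the R dlnm package.

```python
# Illustrative sketch only: a DLNM cross-basis (natural cubic splines in the
# temperature and lag dimensions) built on a consecutive daily series.
import numpy as np
import patsy

def cross_basis(temp, max_lag=10, df_var=4, df_lag=3):
    """Tensor-product (temperature value x lag) basis, one row per day."""
    temp = np.asarray(temp, dtype=float)
    n = len(temp)
    # Spline basis over the temperature values, and over the integer lags 0..max_lag.
    vb = np.asarray(patsy.dmatrix(f"cr(x, df={df_var}) - 1", {"x": temp}))
    lb = np.asarray(patsy.dmatrix(f"cr(l, df={df_lag}) - 1",
                                  {"l": np.arange(max_lag + 1)}))
    cb = np.zeros((n, vb.shape[1] * lb.shape[1]))
    for lag in range(max_lag + 1):
        # Temperature basis shifted back by `lag` days (NaN where history is missing).
        vb_lagged = np.vstack([np.full((lag, vb.shape[1]), np.nan), vb[:n - lag]])
        cb += np.einsum("ti,j->tij", vb_lagged, lb[lag]).reshape(n, -1)
    return cb

# Assumed workflow: compute cb on the full consecutive daily temperature series,
# drop the first max_lag rows (incomplete lag history), then select the rows for
# each case day and its matched referent days (same year-month and day-of-week)
# and fit a conditional logistic regression conditioned on the referent-set
# stratum, e.g. statsmodels' ConditionalLogit(case, cb_rows, groups=stratum).
```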
Abstract:
Several studies have demonstrated an association between polycystic ovary syndrome (PCOS) and the dinucleotide repeat microsatellite marker D19S884, which is located in intron 55 of the fibrillin-3 (FBN3) gene. Fibrillins, including FBN1 and 2, interact with latent transforming growth factor (TGF)-β-binding proteins (LTBP) and thereby control the bioactivity of TGFβs. TGFβs stimulate fibroblast replication and collagen production. The PCOS ovarian phenotype includes increased stromal collagen and expansion of the ovarian cortex, features feasibly influenced by abnormal fibrillin expression. To examine a possible role of fibrillins in PCOS, particularly FBN3, we undertook tagging and functional single nucleotide polymorphism (SNP) analysis (32 SNPs including 10 that generate non-synonymous amino acid changes) using DNA from 173 PCOS patients and 194 controls. No SNP showed a significant association with PCOS, and alleles of most SNPs showed almost identical population frequencies between PCOS and control subjects. No significant differences were observed for microsatellite D19S884. In human PCO stroma/cortex (n = 4) and non-PCO ovarian stroma (n = 9), follicles (n = 3) and corpora lutea (n = 3), and in human ovarian cancer cell lines (KGN, SKOV-3, OVCAR-3, OVCAR-5), FBN1 mRNA levels were approximately 100 times greater than FBN2 and 200–1000-fold greater than FBN3. Expression of LTBP-1 mRNA was 3-fold greater than LTBP-2. We conclude that FBN3 appears to have little involvement in PCOS, but we cannot rule out that other markers in the region of chromosome 19p13.2 are associated with PCOS, or that FBN3 is expressed in other organs and that this may be influencing the PCOS phenotype.
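The per-SNP case-control comparison described above can be illustrated with a simple allelic association test. The sketch below contrasts minor/major allele counts between patients and controls with a chi-square test; the counts are invented for illustration (with 173 patients and 194 controls, each subject contributes two alleles), not genotype data from the study.

```python
# Hedged sketch of a per-SNP allelic chi-square association test.
# Allele counts below are made up purely for illustration.
from scipy.stats import chi2_contingency

def allelic_association(case_minor, case_major, ctrl_minor, ctrl_major):
    table = [[case_minor, case_major],
             [ctrl_minor, ctrl_major]]
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p

chi2, p = allelic_association(case_minor=120, case_major=226,   # 2 * 173 alleles
                              ctrl_minor=130, ctrl_major=258)   # 2 * 194 alleles
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```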
Abstract:
The CDKN2A gene encodes p16 (CDKN2A), a cell-cycle inhibitor protein which prevents inappropriate cell cycling and, hence, proliferation. Germ-line mutations in CDKN2A predispose to the familial atypical multiple-mole melanoma (FAMMM) syndrome but have also been seen in rare families in which only 1 or 2 individuals are affected by cutaneous malignant melanoma (CMM). We therefore sequenced exons 1α and 2 of CDKN2A using lymphocyte DNA isolated from index cases from 67 families with cancers at multiple sites, where the patterns of cancer did not resemble those attributable to known genes such as hMLH1, hMSH2, BRCA1, BRCA2, TP53 or other cancer susceptibility genes. We found one mutation, a missense mutation resulting in a methionine to isoleucine change at codon 53 (M53I) of exon 2. The individual tested had developed 2 CMMs but had no dysplastic nevi and lacked a family history of dysplastic nevi or CMM. Other family members had been diagnosed with oral cancer (2 persons), bladder cancer (1 person) and possibly gall-bladder cancer. While this mutation has been reported in Australian and North American melanoma kindreds, we did not observe it in 618 chromosomes from Scottish and Canadian controls. Functional studies revealed that the CDKN2A variant carrying the M53I change was unable to bind effectively to CDK4, showing that this mutation is of pathological significance. Our results have confirmed that CDKN2A mutations are not limited to FAMMM kindreds but also demonstrate that multi-site cancer families without melanoma are very unlikely to contain CDKN2A mutations.