767 results for cohort study
Abstract:
Aims – To develop local contemporary coefficients for the Trauma and Injury Severity Score in New Zealand, TRISS(NZ), and to evaluate their performance at predicting survival against the original TRISS coefficients. Methods – Retrospective cohort study of adults who sustained a serious traumatic injury and survived until presentation at Auckland City, Middlemore, Waikato, or North Shore Hospitals between 2002 and 2006. Coefficients were estimated using ordinary and multilevel mixed-effects logistic regression models. Results – 1735 eligible patients were identified, 1672 (96%) injured by a blunt mechanism and 63 (4%) by a penetrating mechanism. For blunt mechanism trauma, 1250 (75%) were male and the average age was 38 years (range: 15-94 years). TRISS information was available for 1565 patients, of whom 204 (13%) died. The area under the receiver operating characteristic (ROC) curve was 0.901 (95% CI: 0.879-0.923) for the TRISS(NZ) model and 0.890 (95% CI: 0.866-0.913) for TRISS (P < 0.001). Insufficient data were available to determine coefficients for penetrating mechanism TRISS(NZ) models. Conclusions – Both TRISS models accurately predicted survival for blunt mechanism trauma. However, the TRISS(NZ) coefficients were statistically superior to the original TRISS coefficients. A strong case exists for replacing TRISS coefficients in the New Zealand benchmarking software with these updated TRISS(NZ) estimates.
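Since TRISS is a logistic model over the Revised Trauma Score (RTS), the Injury Severity Score (ISS), and an age index, its survival prediction is easy to reproduce. A minimal sketch follows, assuming illustrative placeholder coefficients rather than the published MTOS or TRISS(NZ) values; re-estimating these coefficients on local data, as the study does, amounts to refitting this logistic regression.

```python
# Minimal sketch of how TRISS converts physiology and anatomy scores into a
# predicted probability of survival. The default coefficients below are
# illustrative placeholders, NOT the published MTOS or TRISS(NZ) values.
import math

def triss_survival_probability(rts, iss, age_years,
                               b0=-0.45, b_rts=0.81, b_iss=-0.08, b_age=-1.74):
    """Logistic TRISS model: Ps = 1 / (1 + exp(-b)).

    rts: Revised Trauma Score; iss: Injury Severity Score.
    Age enters as an index: 0 if under 55 years, 1 otherwise.
    """
    age_index = 0 if age_years < 55 else 1
    b = b0 + b_rts * rts + b_iss * iss + b_age * age_index
    return 1.0 / (1.0 + math.exp(-b))

# Example: a 38-year-old blunt-trauma patient with RTS 7.84 and ISS 16.
print(round(triss_survival_probability(7.84, 16, 38), 3))
```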
Abstract:
Background: The transition to school is a sensitive period for children in relation to school success. In the early school years, children need to develop positive attitudes to school and have experiences that promote academic, behavioural and social competence. When children begin school, there are higher expectations of responsibility and independence, and in the year one class there are more explicit academic goals for literacy and numeracy and more formal instruction. Most importantly, children's early attitudes to learning and learning styles have an impact on later educational outcomes. Method: Data were drawn from the Longitudinal Study of Australian Children (LSAC), a cross-sequential cohort study funded by the Australian Government. In these analyses, Wave 2 (2006) data for 2499 children in the Kindergarten Cohort were used. At Wave 2, children were in the first year of formal school, with a mean age of 6.9 years (SD = 0.26). Measures included a 6-item measure of Approaches to Learning (task persistence, independence) and the Academic Rating Scales for language and literacy and mathematical thinking. Teachers rated their relationships with children on the short form of the Student-Teacher Relationship Scale (STRS). Results: Girls were rated by their teachers as doing better than boys on language and literacy and approaches to learning, and they had a better relationship with their teacher. Children from an Aboriginal or Torres Strait Islander (ATSI) background were rated as doing less well on language and literacy, mathematical thinking, and approaches to learning. Children from high socio-economic position families were rated as doing better on language and literacy, mathematical thinking, and approaches to learning, and they had a better relationship with their teacher. Conclusions: Findings highlight the importance of key demographic variables in understanding children's early school success.
Abstract:
OBJECTIVES: To quantify the driving difficulties of older adults using a detailed assessment of driving performance and to link this with self-reported retrospective and prospective crashes. DESIGN: Prospective cohort study. SETTING: On-road driving assessment. PARTICIPANTS: Two hundred sixty-seven community-living adults aged 70 to 88, randomly recruited through the electoral roll. MEASUREMENTS: Performance on a standardized measure of on-road driving. RESULTS: Lane positioning, approach, and blind-spot monitoring were the most common error types, and errors occurred most frequently in situations involving merging and maneuvering. Drivers reporting more retrospective or prospective crashes made significantly more driving errors. Driving instructor interventions during self-navigation (where the instructor had to brake or take control of the steering to avoid an accident) were significantly associated with more retrospective and prospective crashes; every instructor intervention almost doubled prospective crash risk. CONCLUSION: These findings suggest that on-road driving assessment provides useful information on older driver difficulties, with the self-directed component providing the most valuable information.
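The headline estimate here (each instructor intervention almost doubling prospective crash risk) is the kind of rate ratio a count regression produces. A sketch on synthetic data follows; the abstract does not specify the study's actual model, so the Poisson specification is an assumption.

```python
# Sketch of a count regression linking instructor interventions to prospective
# crashes. Data are synthetic; the study's actual model is not given above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 267                                   # cohort size from the abstract
interventions = rng.poisson(0.5, n)       # instructor brake/steer events (synthetic)
# Simulate crashes whose expected count doubles per intervention (rate ratio 2).
crashes = rng.poisson(0.1 * 2.0 ** interventions)

X = sm.add_constant(interventions)
fit = sm.GLM(crashes, X, family=sm.families.Poisson()).fit()
print(np.exp(fit.params[1]))              # rate ratio per intervention, ~2
```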
Abstract:
Purpose: Television viewing time, independent of leisure-time physical activity, has cross-sectional relationships with the metabolic syndrome and its individual components. We examined whether baseline and five-year changes in self-reported television viewing time are associated with changes in continuous biomarkers of cardio-metabolic risk (waist circumference, triglycerides, high-density lipoprotein cholesterol, systolic and diastolic blood pressure, fasting plasma glucose, and a clustered cardio-metabolic risk score) in Australian adults. Methods: AusDiab is a prospective, population-based cohort study with biological, behavioral, and demographic measures collected in 1999–2000 and 2004–2005. Non-institutionalized adults aged ≥ 25 years were measured at baseline (11,247; 55% of those completing an initial household interview); 6,400 took part in the five-year follow-up biomedical examination, and 3,846 met the inclusion criteria for this analysis. Multiple linear regression analysis was used, and unstandardized B coefficients (95% CI) are reported. Results: Baseline television viewing time (per 10 hours/week) was not significantly associated with change in any of the biomarkers of cardio-metabolic risk. Increases in television viewing time over five years (per 10 hours/week) were associated with increases in waist circumference (cm) (men: 0.43 (0.08, 0.78), P = 0.02; women: 0.68 (0.30, 1.05), P < 0.001), diastolic blood pressure (mmHg) (women: 0.47 (0.02, 0.92), P = 0.04), and the clustered cardio-metabolic risk score (women: 0.03 (0.01, 0.05), P = 0.007). These associations were independent of baseline television viewing time, baseline and change in physical activity, and other potential confounders. Conclusion: These findings indicate that an increase in television viewing time is associated with adverse cardio-metabolic biomarker changes. Further prospective studies using objective measures of several sedentary behaviors are required to confirm causality of the associations found.
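A hedged sketch of the regression reported here: five-year change in a biomarker regressed on change in viewing time (per 10 hours/week unit), adjusted for baseline viewing. The data are synthetic and the covariate set is deliberately simplified relative to the study's full adjustment.

```python
# Sketch of the multiple linear regression above: change in waist circumference
# on five-year change in TV time, adjusting for baseline TV time. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 3846                                        # analysis sample from the abstract
df = pd.DataFrame({
    "tv_baseline": rng.normal(2.0, 1.0, n),     # baseline TV time, 10 h/wk units
    "tv_change": rng.normal(0.0, 1.0, n),       # 5-year change, 10 h/wk units
})
# Simulate a 0.68 cm waist increase per 10 h/wk rise in viewing (women's estimate).
df["waist_change"] = 0.68 * df["tv_change"] + rng.normal(0, 5, n)

X = sm.add_constant(df[["tv_change", "tv_baseline"]])
fit = sm.OLS(df["waist_change"], X).fit()
print(fit.params["tv_change"], fit.conf_int().loc["tv_change"].values)
```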
Abstract:
In a critical review of the literature to assess the efficacy of monotherapy and subsequent combination anticonvulsant therapy in the treatment of neonatal seizures, four studies were examined: three randomised controlled trials and one retrospective cohort study. Each study used phenobarbitone for monotherapy, with doses reaching a maximum of 40 mg/kg. Anticonvulsant drugs used in conjunction with phenobarbitone for combination therapy included midazolam, clonazepam, lorazepam, phenytoin and lignocaine. Each study used an electroencephalograph for seizure diagnosis and neonatal monitoring when determining therapy efficacy and final outcome assessments. Collectively, the studies suggest that neither monotherapy nor combination therapy is entirely effective in seizure control. Monotherapy demonstrated a 29%-50% success rate for complete seizure control, whereas combination therapy administered after the failure of monotherapy demonstrated a success rate of 43%-100%. When these trials were combined, the overall success rate was 44% (n = 34/78) for monotherapy and 72% (n = 56/78) for combination therapy. Though the evidence was inconclusive, it would appear that combination therapy is of greater benefit to infants unresponsive to monotherapy. Further research, such as multi-site randomised controlled trials using standardised criteria and data collection, is required within this specialised area.
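The pooled rates are straightforward arithmetic over the combined trials. The sketch below checks them and adds a two-proportion test as an illustration; the review itself reports no such test.

```python
# Worked check of the pooled success rates quoted above, plus a simple
# two-proportion z-test added for illustration only.
from statsmodels.stats.proportion import proportions_ztest

mono_success, mono_n = 34, 78          # pooled monotherapy: 34/78 = 44%
combo_success, combo_n = 56, 78        # pooled combination therapy: 56/78 = 72%

print(mono_success / mono_n, combo_success / combo_n)   # 0.436..., 0.718...
stat, p = proportions_ztest([mono_success, combo_success], [mono_n, combo_n])
print(f"z = {stat:.2f}, p = {p:.4f}")
```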
Abstract:
Objectives: To assess the validity of the Waterlow screening tool in a cohort of internal medicine patients and to identify factors contributing to pressure injury. Design: Longitudinal cohort study. Setting: A tertiary hospital in Brisbane, Australia. Participants: 274 patients admitted through the Emergency Department or outpatient clinics and expected to remain in hospital for at least three days were included in the study. The mean age was 65.3 years. Interventions: Patients were screened on admission using the Waterlow screening tool. Every second day, their pressure ulcer status was monitored and recorded. Main outcome measures: Pressure ulcer incidence. Results: Fifteen participants (5.5%) had an existing pressure ulcer and a further 12 (4.4%) developed a pressure ulcer during their hospital stay. Sensitivity of the Waterlow scale was 0.67 (95% CI: 0.35 to 0.88); specificity 0.79 (95% CI: 0.73 to 0.85); PPV 0.13 (95% CI: 0.07 to 0.24); and NPV 0.98 (95% CI: 0.94 to 0.99). Conclusion: This study provides further evidence of the poor predictive validity of the Waterlow scale. A suitably powered randomised controlled trial is urgently needed to provide definitive evidence about the usefulness of the Waterlow scale compared with other screening tools and with clinical judgement.
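All four reported diagnostics derive from a 2x2 table of Waterlow result against pressure ulcer incidence. In the sketch below the cell counts are reconstructed to be roughly consistent with the published estimates (12 incident ulcers, sensitivity 0.67, specificity 0.79); they are not taken from the paper.

```python
# Screening-tool diagnostics with Wilson 95% CIs from a 2x2 table.
# Cell counts are reconstructed approximations, not the study's actual data.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 8, 4        # ulcer developed:  Waterlow positive / negative
fp, tn = 52, 195     # no ulcer:         Waterlow positive / negative

def rate_with_ci(k, n):
    lo, hi = proportion_confint(k, n, method="wilson")
    return f"{k / n:.2f} (95% CI {lo:.2f} to {hi:.2f})"

print("sensitivity:", rate_with_ci(tp, tp + fn))
print("specificity:", rate_with_ci(tn, tn + fp))
print("PPV:        ", rate_with_ci(tp, tp + fp))
print("NPV:        ", rate_with_ci(tn, tn + fn))
```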
Abstract:
BACKGROUND: The relationship between cigarette smoking and cardiovascular disease is well established, yet the underlying mechanisms remain unclear. Although smokers have a more atherogenic lipid profile, this may be mediated by other lifestyle-related factors. Analysis of lipoprotein subclasses by nuclear magnetic resonance (NMR) spectroscopy may improve characterisation of lipoprotein abnormalities. OBJECTIVE: We used NMR spectroscopy to investigate the relationships between smoking status, lifestyle-related risk factors, and lipoproteins in a contemporary cohort. METHODS: A total of 612 participants (360 women) aged 40–69 years at baseline (1990–1994) enrolled in the Melbourne Collaborative Cohort Study had plasma lipoproteins measured with NMR. Data were analysed separately by sex. RESULTS: After adjusting for lifestyle-related risk factors, including alcohol and dietary intake, physical activity, and weight, mean total low-density lipoprotein (LDL) particle concentration was greater for female smokers than nonsmokers. Both medium- and small-LDL particle concentrations contributed to this difference. Total high-density lipoprotein (HDL) and large-HDL particle concentrations were lower for female smokers than nonsmokers. The proportion with low HDL particle number was greater for female smokers than nonsmokers. For men, there were few smoking-related differences in lipoprotein measures. CONCLUSION: Female smokers have a more atherogenic lipoprotein profile than nonsmokers. This difference is independent of other lifestyle-related risk factors. Lipoprotein profiles did not differ greatly between male smokers and nonsmokers.
Abstract:
Goals: Few studies have repeatedly evaluated quality of life and potentially relevant factors in patients with benign primary brain tumor. The purpose of this study was to explore the relationships between symptom distress, functional status, depression, and quality of life prior to surgery (T1) and 1 month post-discharge (T2). Patients and methods: This was a prospective cohort study including 58 patients with benign primary brain tumor in one teaching hospital in the Taipei area of Taiwan. The research instruments included the M.D. Anderson Symptom Inventory, the Functional Independence Measure scale, the Hospital Depression Scale, and the Functional Assessment of Cancer Therapy-Brain. Results: Symptom distress (T1: r = −0.90, p < 0.01; T2: r = −0.52, p < 0.01), functional status (T1: r = 0.56, p < 0.01), and depression (T1: r = −0.71, p < 0.01) demonstrated significant relationships with patients' quality of life. Multivariate analysis identified that symptom distress (explaining 80.2% of variance, R²inc = 0.802, p = 0.001) and depression (explaining 5.2%, R²inc = 0.052, p < 0.001) had a significant independent influence on quality of life prior to surgery (T1) after controlling for key demographic and medical variables. Furthermore, only symptom distress (explaining 27.1%, R²inc = 0.271, p = 0.001) continued to have a significant independent influence on quality of life at 1 month after discharge (T2). Conclusions: The study highlights the potential importance of a patient's symptom distress on quality of life prior to and following surgery. Health professionals should inquire about symptom distress over time. Specific interventions for symptoms may reduce their impact on quality of life. Additional studies should evaluate the effect of symptom distress on longer-term quality of life in patients with benign brain tumor.
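The R²inc figures come from hierarchical regression: fit a model containing only the control variables, add the predictor of interest, and take the gain in R². A minimal sketch on synthetic data, with a single control variable standing in for the study's demographic and medical covariates:

```python
# Hierarchical regression sketch: R^2 increment when symptom distress is added
# to a base model of controls. All data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 58                                          # sample size from the abstract
df = pd.DataFrame({
    "age": rng.normal(50, 12, n),
    "symptom_distress": rng.normal(3, 1, n),
})
df["qol"] = 80 - 8 * df["symptom_distress"] - 0.1 * df["age"] + rng.normal(0, 4, n)

base = sm.OLS(df["qol"], sm.add_constant(df[["age"]])).fit()
full = sm.OLS(df["qol"], sm.add_constant(df[["age", "symptom_distress"]])).fit()
print(f"R2 increment for symptom distress: {full.rsquared - base.rsquared:.3f}")
```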
Abstract:
Evidence that our food environment can affect meal size is often taken to indicate a failure of ‘conscious control’. By contrast, our research suggests that ‘expected satiation’ (the fullness that a food is expected to confer) predicts self-selected meal size. However, the role of meal planning as a determinant of actual meal size remains unresolved, as does the extent to which meal planning is commonplace outside the laboratory. Here, we quantified meal planning and its relation to meal size in a large cohort study. Participants (N = 764; mean age 25.6 years; 78% female) completed a questionnaire containing items relating to their last meal. The majority (91%) of meals were consumed in their entirety. Furthermore, in 92% of these cases the participants had decided to consume the whole meal before it began. A second major objective was to explore the prospect that meal plans are revised based on within-meal experience (e.g., development of satiation). Only 8% of participants reported ‘unexpected’ satiation that caused them to consume less than anticipated. Moreover, at the end of the meal, 57% indicated that they were not fully satiated, and 29% continued eating beyond comfortable satiation (often to avoid wasting food). This pattern was moderated neither by BMI nor by dieting status, and was observed across meal types. Together, these data indicate that meals are often planned and that planning corresponds closely with the amount consumed. By contrast, we find limited evidence for within-meal modification of these plans, suggesting that ‘pre-meal cognition’ is an important determinant of meal size in humans.
Abstract:
Lately, there has been increasing interest in the association between temperature and adverse birth outcomes, including preterm birth (PTB) and stillbirth. PTB is a major predictor of many diseases later in life, and stillbirth is a devastating event for parents and families. The aim of this study was to assess the seasonal pattern of adverse birth outcomes, and to examine possible associations of maternal exposure to temperature with PTB and stillbirth. We also aimed to identify whether there were any periods of the pregnancy in which exposure to temperature was particularly harmful. A retrospective cohort study design was used, and we retrieved individual birth records from the Queensland Health Perinatal Data Collection Unit for all singleton births (excluding twins and triplets) delivered in Brisbane between 1 July 2005 and 30 June 2009. We obtained weather data (including hourly relative humidity and minimum and maximum temperature) and air-pollution data (including PM10, SO2 and O3) from the Queensland Department of Environment and Resource Management. We used survival analyses with the time-dependent variables of temperature, humidity and air pollution, and the competing risks of stillbirth and live birth. To assess the monthly pattern of the birth outcomes, we fitted month of pregnancy as a time-dependent variable. We examined the seasonal pattern of the birth outcomes and the relationship between exposure to high or low temperatures and birth outcomes over the four lag weeks before birth. We further stratified by categorisation of PTB: extreme PTB (< 28 weeks of gestation), PTB (28–36 weeks of gestation), and term birth (≥ 37 weeks of gestation). Lastly, we examined the effect of temperature variation in each week of the pregnancy on birth outcomes. There was a bimodal seasonal pattern in gestation length. After adjusting for temperature, the seasonal pattern changed from bimodal to a single peak in winter. The risk of stillbirth was statistically significantly lower in March compared with January. After adjusting for temperature, the March trough remained statistically significant and there was a peak in risk (not statistically significant) in winter. There was an acute effect of temperature on gestational age and stillbirth, with shortened gestation for increasing temperatures from 15 °C to 25 °C over the last four weeks before birth. For stillbirth, we found an increasing risk with increasing temperatures from 12 °C to approximately 20 °C, and no change in risk at temperatures above 20 °C. Certain periods of the pregnancy were more vulnerable to temperature variation. The risk of PTB (28–36 weeks of gestation) increased as temperatures increased above 21 °C. For stillbirth, the fetus was most vulnerable at less than 28 weeks of gestation, but there were also effects at 28–36 weeks of gestation. For fetuses of 37 or more weeks of gestation, increasing temperatures did not increase the risk of stillbirth. We did not find any adverse effects of cold temperature on birth outcomes in this cohort. Our findings contribute to knowledge of the relationship between temperature and birth outcomes, which is particularly important in the context of climate change. The results may have implications for public health policy and planning, as they indicate that pregnant women could decrease their risk of adverse birth outcomes by avoiding exposure to high temperatures and seeking cool environments on hot days.
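The core analysis, survival models with a time-dependent temperature exposure, can be sketched in long format with one row per pregnancy-week. The sketch below uses synthetic data and omits the competing-risks machinery (stillbirth vs. live birth) that the study describes.

```python
# Time-varying Cox model sketch: one row per gestational week, with weekly
# maximum temperature as the time-dependent covariate. Data are synthetic,
# and the competing risks the study modelled are not reproduced here.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(3)
rows = []
for pid in range(200):
    end_week = rng.integers(30, 42)             # week the pregnancy ends (synthetic)
    for week in range(20, end_week):
        rows.append({"id": pid, "start": week, "stop": week + 1,
                     "temp_max": rng.normal(22, 5),       # weekly max temp, deg C
                     "event": int(week + 1 == end_week)}) # birth in this interval
df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
print(ctv.summary[["coef", "exp(coef)"]])
```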
Abstract:
Background: Known risk factors for secondary lymphedema only partially explain who develops lymphedema following cancer, suggesting that inherited genetic susceptibility may influence risk. Moreover, identification of molecular signatures could facilitate lymphedema risk prediction prior to surgery or lead to effective drug therapies for prevention or treatment. Recent advances in the molecular biology underlying development of the lymphatic system and related congenital disorders implicate a number of potential candidate genes to explore in relation to secondary lymphedema. Methods and Results: We undertook a nested case-control study, with participants who had developed lymphedema after surgical intervention within the first 18 months of their breast cancer diagnosis serving as cases (n = 22) and those without lymphedema serving as controls (n = 98), identified from a prospective, population-based cohort study in Queensland, Australia. TagSNPs that covered all known genetic variation in the genes SOX18, VEGFC, VEGFD, VEGFR2, VEGFR3, RORC, FOXC2, LYVE1, ADM and PROX1 were selected for genotyping. Multiple SNPs within three receptor genes, VEGFR2, VEGFR3 and RORC, were associated with lymphedema, defined by statistical significance (p < 0.05) or extreme risk estimates (OR < 0.5 or > 2.0). Conclusions: These provocative, albeit preliminary, findings regarding possible genetic predisposition to secondary lymphedema following breast cancer treatment warrant further attention for potential replication using larger datasets.
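A per-SNP scan of this kind is typically a series of logistic regressions of case status on genotype dose, each yielding an odds ratio. A sketch with simulated genotypes and hypothetical SNP labels (the study's actual tagSNP identifiers are not given in the abstract):

```python
# Per-SNP association sketch for a nested case-control design. Genotypes and
# phenotypes are simulated; SNP labels are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_cases, n_controls = 22, 98                     # from the abstract
y = np.r_[np.ones(n_cases), np.zeros(n_controls)]

for snp in ["rs_hypothetical_1", "rs_hypothetical_2"]:
    dose = rng.binomial(2, 0.3, y.size)          # 0/1/2 copies of the minor allele
    fit = sm.Logit(y, sm.add_constant(dose)).fit(disp=False)
    or_, (lo, hi) = np.exp(fit.params[1]), np.exp(fit.conf_int()[1])
    print(f"{snp}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```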
Abstract:
Introduction: The suitability of video conferencing (VC) technology for clinical purposes relevant to geriatric medicine is still being established. This project aimed to determine the validity of the diagnosis of dementia via VC. Methods: This was a multisite, noninferiority, prospective cohort study. Patients aged 50 years and older, referred by their primary care physician for cognitive assessment, were assessed at 4 memory disorder clinics. All patients were assessed independently by 2 specialist physicians. They were allocated one face-to-face (FTF) assessment (the reference standard, usual clinical practice) and an additional assessment (either a usual FTF assessment or a VC assessment) on the same day. Each specialist physician had access to the patient chart and the results of a battery of standardized cognitive assessments administered FTF by the clinic nurse. Percentage agreement (P0) and the weighted kappa statistic with linear weights (Kw) were used to assess inter-rater reliability across the 2 study groups on the diagnosis of dementia (cognition normal, impaired, or demented). Results: The 205 patients were allocated to either the videoconference group (n = 100) or the standard practice group (n = 105); 106 were men. The average age was 76 years (SD 9; range 51–95) and the average Standardized Mini-Mental State Examination score was 23.9 (SD 4.7; range 9–30). Agreement for the videoconference group (P0 = 0.71; Kw = 0.52; P < .0001) and for the standard practice group (P0 = 0.70; Kw = 0.50; P < .0001) was statistically significant (P < .05). The summary kappa statistic of 0.51 (P = .84) indicated that VC was not inferior to FTF assessment. Conclusions: Previous studies have shown that preliminary standardized assessment tools can be reliably administered and scored via VC. This study focused on the geriatric assessment component of the interview (interpretation of standardized assessments, taking a history, and formulating a diagnosis by a medical specialist) and identified high levels of agreement for diagnosing dementia. A model of service incorporating either locally or remotely administered standardized assessments, together with remote specialist assessment, is a reliable process for enabling the diagnosis of dementia for isolated older adults.
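Percentage agreement and linearly weighted kappa over the three ordered diagnostic categories are standard to compute. A sketch with synthetic stand-ins for the two specialists' ratings:

```python
# Inter-rater agreement sketch: percentage agreement (P0) and linearly weighted
# kappa (Kw) across three ordered categories. Ratings are synthetic.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(5)
categories = [0, 1, 2]                # cognition normal, impaired, demented
rater_ftf = rng.choice(categories, 100)
# Second rater mostly agrees; disagreements land in adjacent categories.
rater_vc = np.clip(rater_ftf + rng.choice([-1, 0, 0, 0, 1], 100), 0, 2)

p0 = np.mean(rater_ftf == rater_vc)                      # percentage agreement
kw = cohen_kappa_score(rater_ftf, rater_vc, weights="linear")
print(f"P0 = {p0:.2f}, weighted kappa = {kw:.2f}")
```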
Abstract:
Background & aims: The confounding effect of disease on the outcomes of malnutrition using diagnosis-related groups (DRG) has never been studied in a multidisciplinary setting. This study aims to determine the prevalence of malnutrition in a tertiary hospital in Singapore and its impact on hospitalization outcomes and costs, controlling for DRG. Methods: This prospective cohort study included a matched case control study. Subjective Global Assessment was used to assess the nutritional status on admission of 818 adults. Hospitalization outcomes over 3 years were adjusted for gender, age, ethnicity, and matched for DRG. Results: Malnourished patients (29%) had longer hospital stays (6.9 ± 7.3 days vs. 4.6 ± 5.6 days, p < 0.001) and were more likely to be readmitted within 15 days (adjusted relative risk = 1.9, 95%CI 1.1–3.2, p = 0.025). Within a DRG, the mean difference between actual cost of hospitalization and the average cost for malnourished patients was greater than well-nourished patients (p = 0.014). Mortality was higher in malnourished patients at 1 year (34% vs. 4.1 %), 2 years (42.6% vs. 6.7%) and 3 years (48.5% vs. 9.9%); p < 0.001 for all. Overall, malnutrition was a significant predictor of mortality (adjusted hazard ratio = 4.4, 95% CI 3.3-6.0, p < 0.001). Conclusions: Malnutrition was evident in up to one third of the inpatients and led to poor hospitalization outcomes and survival as well as increased costs of care, even after matching for DRG. Strategies to prevent and treat malnutrition in the hospital and post-discharge are needed.
Abstract:
Purpose: The aim of this study was to examine whether older people are prepared to engage in appropriate falls prevention strategies after discharge from hospital. Design and Methods: We used a semi-structured interview to survey older patients about to be discharged from hospital and examined their knowledge regarding falls prevention strategies to utilize in the post-discharge period. The study was part of a prospective cohort study, nested within a larger randomized controlled trial. Participants (n = 333) were asked to suggest strategies to reduce their falls risk at home after discharge, and their responses were compared with current reported research evidence for falls prevention interventions. Results: Participants' strategies (n = 629) were classified into 7 categories: behavioral, support while mobilizing, approach to movement, physical environment, visual, medical, and activities or exercise. Although exercise has been identified as an effective falls risk reduction strategy, only 2.9% of participants suggested engaging in exercise. Falls prevention was most often conceptualized by participants as requiring 1 (35.4%) or 2 (40.8%) strategies for avoiding an accidental event, rather than engaging in sustained, multiple risk reduction behaviors. Implications: Results demonstrate that older patients have low levels of knowledge about appropriate falls prevention strategies that could be used after discharge, in spite of their increased falls risk during this period. Findings suggest that health care workers should design and deliver falls prevention education programs specifically targeted to older people who are to be discharged from hospital.
Abstract:
Objectives: Malnutrition is common in older hospitalised patients, and barriers to adequate intake in hospital limit the effectiveness of hospital-based nutrition interventions. This pilot study was undertaken to determine whether nutrition-focussed care at discharge and in the early post-hospital period is feasible and acceptable to patients and carers, and improves nutritional status. Design: Prospective cohort study. Setting: Internal medicine wards of a tertiary teaching hospital in Brisbane, Australia. Participants: Patients aged 65 and older admitted for at least 3 days, identified as malnourished or at risk of malnutrition using the Mini Nutritional Assessment (MNA). Interventions: An interdisciplinary discharge team (specialist discharge planning nurse and accredited practising dietitian) provided nutrition-focussed education, advice, service coordination and follow-up (home visits and telephone) for 6 weeks following hospitalisation. Measurements: Nutritional intake, weight, functional status and MNA were recorded 6 and 12 weeks after discharge. Service intensity and changes to care were noted, and hospital readmissions recorded. Service feedback from patients and carers was sought using a brief questionnaire. Results: 12 participants were enrolled during the 6-week pilot (mean age 82 years, 50% male). All received 1-2 home visits and 3-8 telephone calls. Four participants had new community services arranged, 4 were commenced on oral nutritional supplements, and 7 were referred to community dietetics services for follow-up. Two participants had a decline in MNA score of more than 10% at 12-week follow-up, while the remainder improved by at least 10%. Individualised care, including community service coordination, was valued by participants. Conclusion: The proposed model of care for older adults was feasible, acceptable to patients and carers, and associated with improved nutritional status at 12 weeks for most participants. The pilot data will be useful for the design of intervention trials.