873 results for self-reported vote
Abstract:
- Background: Exercise referral schemes (ERS) aim to identify inactive adults in the primary-care setting. The GP or health-care professional then refers the patient to a third-party service, with this service taking responsibility for prescribing and monitoring an exercise programme tailored to the needs of the individual.
- Objective: To assess the clinical effectiveness and cost-effectiveness of ERS for people with a diagnosed medical condition known to benefit from physical activity (PA). The scope of this report was broadened to consider individuals without a diagnosed condition who are sedentary.
- Data sources: MEDLINE, EMBASE, PsycINFO, The Cochrane Library, ISI Web of Science, SPORTDiscus and ongoing trial registries were searched (from 1990 to October 2009) and included study references were checked.
- Methods: Systematic reviews of the effectiveness of ERS, predictors of ERS uptake and adherence, and the cost-effectiveness of ERS; and the development of a decision-analytic economic model to assess the cost-effectiveness of ERS.
- Results: Seven randomised controlled trials (UK, n = 5; non-UK, n = 2) met the effectiveness inclusion criteria: five compared ERS with usual care, two compared ERS with an alternative PA intervention, and one compared ERS with ERS plus a self-determination theory (SDT) intervention. In intention-to-treat analysis, compared with usual care, there was weak evidence of an increase in the number of ERS participants who achieved a self-reported 90-150 minutes of at least moderate-intensity PA per week at 6-12 months' follow-up [pooled relative risk (RR) 1.11, 95% confidence interval 0.99 to 1.25]. There was no consistent evidence of a difference between ERS and usual care in the duration of moderate/vigorous-intensity and total PA or in other outcomes, for example physical fitness, serum lipids and health-related quality of life (HRQoL). There was no between-group difference in outcomes between ERS and alternative PA interventions or ERS plus a SDT intervention. None of the included trials separately reported outcomes in individuals with medical diagnoses. Fourteen observational studies and five randomised controlled trials provided a numerical assessment of ERS uptake and adherence (UK, n = 16; non-UK, n = 3). Women and older people were more likely to take up ERS, but women, when compared with men, were less likely to adhere. The four previous economic evaluations identified suggest that ERS is a cost-effective intervention. Indicative incremental cost per quality-adjusted life-year (QALY) estimates for ERS under various scenarios were based on a de novo model-based economic evaluation. Compared with usual care, the mean incremental cost for ERS was £169 and the mean incremental QALY gain was 0.008, with the base-case incremental cost-effectiveness ratio at £20,876 per QALY in sedentary people without a medical condition, and a cost per QALY of £14,618 in sedentary obese individuals, £12,834 in sedentary hypertensive patients, and £8,414 in sedentary individuals with depression. Estimates of cost-effectiveness were highly sensitive to plausible variations in the RR for change in PA and in the cost of ERS.
- Limitations: We found very limited evidence of the effectiveness of ERS. The estimates of the cost-effectiveness of ERS are based on a simple analytical framework. The economic evaluation reports small differences in costs and effects, and the findings highlight the wide range of uncertainty associated with the estimates of effectiveness and the impact of effectiveness on HRQoL. No data were identified as part of the effectiveness review to allow for adjustment of the effect of ERS in different populations.
- Conclusions: There remains considerable uncertainty as to the effectiveness of ERS for increasing activity, fitness or health indicators, or whether ERS are an efficient use of resources in sedentary people without a medical diagnosis. We failed to identify any trial-based evidence of the effectiveness of ERS in those with a medical diagnosis. Future work should include randomised controlled trials assessing the clinical effectiveness and cost-effectiveness of ERS in disease groups that may benefit from PA.
- Funding: The National Institute for Health Research Health Technology Assessment programme.
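The base-case figure quoted in the Results above follows from the standard incremental cost-effectiveness ratio (ICER); a worked check using the rounded increments reported in the abstract (the small gap from £20,876 only reflects rounding of the published cost and QALY increments):

    \[
      \mathrm{ICER} \;=\; \frac{\Delta \text{cost}}{\Delta \text{QALYs}} \;=\; \frac{\pounds 169}{0.008} \;\approx\; \pounds 21{,}000 \text{ per QALY}
    \]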
Abstract:
Objective: To systematically review the effectiveness of intervention studies promoting diet and physical activity (PA) in nurses. Data Source: English-language manuscripts published between 1970 and 2014 in PubMed, Scopus, CINAHL, and EMBASE, as well as those accessed with the PICO tool, were reviewed. Study Inclusion and Exclusion Criteria: Inclusion criteria comprised (1) nurses/student nurses working in a health care setting and (2) interventions where PA and/or diet behaviors were the primary outcome. Exclusion criteria were (1) non–peer-reviewed articles or conference abstracts and (2) interventions focused on treatment of chronic conditions or lifestyle factors other than PA or diet in nurses. Data Extraction: Seventy-one full texts were retrieved and assessed for inclusion by two reviewers. Data were extracted by one reviewer and checked for accuracy by a second reviewer. Data Synthesis: Extracted data were synthesized in a tabular format and narrative summary. Results: Nine studies (n = 737 nurses) met the inclusion criteria. The quality of the studies was low to moderate. Four studies reported an increase in self-reported PA through structured exercise and goal setting. Dietary outcomes were generally positive, but were measured in only three studies, with some limitations in the assessment methods. Two studies reported improved body composition without significant changes in diet or PA. Conclusions: Outcomes of interventions to change nurses' PA and diet behavior are promising, but inconsistent. Additional and higher-quality interventions that include objective and validated outcome measures and appropriate process evaluation are required.
Abstract:
OBJECTIVES Based on self-reported measures, sedentary time has been associated with chronic disease and mortality. This study examined the validity of the wrist-worn GENEActiv accelerometer for measuring sedentary time (i.e. sitting and lying) by posture classification, during waking hours in free-living adults. DESIGN Fifty-seven participants (age = 18-55 years; 52% male) were recruited using convenience sampling from a large metropolitan Australian university. METHODS Participants wore a GENEActiv accelerometer on their non-dominant wrist and an activPAL device attached to their right thigh for 24 h (00:00 to 23:59:59). Pearson's correlation coefficient was used to examine the convergent validity of the GENEActiv and the activPAL for estimating total sedentary time during waking hours. Agreement was illustrated using Bland and Altman plots, and intra-individual agreement for posture was assessed with the Kappa statistic. RESULTS Estimates of average total sedentary time over 24 h were 623 (SD 103) min/day from the GENEActiv and 626 (SD 123) min/day from the activPAL, with an intraclass correlation coefficient of 0.80 (95% confidence interval 0.68-0.88). Bland and Altman plots showed slight underestimation of mean total sedentary time for the GENEActiv relative to the activPAL (mean difference: -3.44 min/day), with moderate limits of agreement (-144 to 137 min/day). Mean Kappa for posture was 0.53 (SD 0.12), indicating moderate agreement for this sample at the individual level. CONCLUSIONS The estimation of sedentary time by posture classification with the wrist-worn GENEActiv accelerometer was comparable to the activPAL. The GENEActiv may provide an alternative, easy-to-wear, device-based measure for descriptive estimates of sedentary time in population samples.
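As a rough illustration of the agreement analysis described above, the sketch below computes the Bland-Altman bias and 95% limits of agreement for paired per-participant estimates of daily sedentary time; the arrays and values are invented placeholders, not study data.

    import numpy as np

    # Hypothetical paired estimates of sedentary time (min/day), one pair per
    # participant; real values would come from the GENEActiv and activPAL outputs.
    geneactiv = np.array([610.0, 655.0, 590.0, 702.0, 618.0, 575.0])
    activpal = np.array([620.0, 648.0, 601.0, 715.0, 605.0, 580.0])

    # Bland-Altman agreement: mean difference (bias) and 95% limits of agreement.
    diff = geneactiv - activpal
    bias = diff.mean()
    sd = diff.std(ddof=1)
    lower, upper = bias - 1.96 * sd, bias + 1.96 * sd

    print(f"Mean difference: {bias:.1f} min/day")
    print(f"95% limits of agreement: {lower:.1f} to {upper:.1f} min/day")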
Abstract:
Exposure to aqueous film forming foam (AFFF) was evaluated in 149 firefighters working at AFFF training facilities in Australia by analysis of PFOS and related compounds in serum. A questionnaire was designed to capture information about basic demographic factors, lifestyle factors and potential occupational exposure (such as work history and self-reported skin contact with foam). The results showed that a number of factors were associated with PFAA serum concentrations. Blood donation was found to be linked to low PFAA levels, and the concentrations of PFOS and PFHxS were positively associated with years spent in jobs involving AFFF contact. The highest levels of PFOS and PFHxS were an order of magnitude higher than in the general population in Australia and Canada. Study participants who had worked ten years or less had levels of PFOS that were similar to or only slightly above those of the general population. This coincides with the phase-out of 3M AFFF from all training facilities in 2003, and suggests that exposure to PFOS and PFHxS from AFFF has declined in recent years. Self-reporting of skin contact and frequency of contact were used as an index of exposure. Using this index, there was no relationship between PFOS levels and skin exposure. This index of exposure is limited, as it relies on self-report, considers only skin exposure to AFFF, and does not capture other potential routes of exposure. Possible associations between serum PFAA concentrations and five biochemical outcomes were assessed. The outcomes were serum cholesterol, triglycerides, high-density lipoproteins, low-density lipoproteins, and uric acid. No statistical associations were observed between any of these endpoints and serum PFAA concentrations.
Abstract:
It has been argued that transition points in life, such as the approach towards, and early years of, retirement present key opportunities for interventions to improve the health of the population. Research has also highlighted inequalities in health status in the retired population and in response to interventions, which should be addressed. We aimed to conduct a systematic review to synthesise international evidence on the types and effectiveness of interventions to increase physical activity among people around the time of retirement. A systematic review of the literature was carried out between February 2014 and April 2015. Searches were not limited by language or location, but were restricted by date to studies published from 1990 onwards. Methods for identification of relevant studies included electronic database searching, reference list checking, and citation searching. The systematic search of the literature identified 104 papers that described their study populations as older adults. However, we found only one paper that specifically described its participants as being around the time of retirement. The intervention approaches for older adults encompassed: training of health care professionals; counselling and advice giving; group sessions; individual training sessions; in-home exercise programmes; in-home computer-delivered programmes; in-home telephone support; in-home diet and exercise programmes; and community-wide initiatives. The majority of papers reported some intervention effect, with evidence of positive outcomes for all types of programmes. A wide range of different measures were used to evaluate effectiveness, many of which were self-reported, and few studies included evaluation of sedentary time. While the retirement transition is considered a significant point of life change, little research has been conducted to assess whether physical activity interventions at this time may be effective in promoting or maintaining activity, or reducing health inequalities. We were unable to find any evidence that the transition to retirement period was, or was not, a significant point for intervention. Studies in older adults more generally indicated that a range of interventions might be effective for people around retirement age.
Abstract:
BACKGROUND The current impetus for developing alcohol and/or other drugs (AODs) workplace policies in Australia is to reduce workplace AOD impairment, improve safety, and prevent AOD-related injury in the workplace. For these policies to be effective, they need to be informed by scientific evidence. Evidence to inform the development and implementation of effective workplace AOD policies is currently lacking. There does not currently appear to be conclusive evidence for the effectiveness of workplace AOD policies in reducing impairment and preventing AOD-related injury. There is also no apparent evidence regarding which factors facilitate or impede the success of an AOD policy, or whether, for example, unsuccessful policy outcomes were due to poor policy or merely poor implementation of the policy. It was the aim of this research to undertake a process, impact, and outcome evaluation of a workplace AOD policy, and to contribute to the body of knowledge on the development and implementation of effective workplace AOD policies.
METHODS The research setting was a state-based power-generating industry in Australia between May 2008 and May 2010. Participants for the process evaluation study were individuals who were integral to either the development or the implementation of the workplace AOD policy, or both of these processes (key informants), and comprised the majority of individuals who were involved in the process of developing and/or implementing the workplace AOD policy. The sample represented the two main groups of interest—management and union delegates/employee representatives—from all three of the participating organisations. For the impact and outcome evaluation studies, the population included all employees from the three participating organisations, and participants were all employees who consented to participate in the study and who completed both the pre- and post-policy implementation questionnaires. Qualitative methods in the form of interviews with key stakeholders were used to evaluate the process of developing and implementing the workplace AOD policy. In order to evaluate the impact of the policy with regard to the risk factors for workplace AOD impairment, and the outcome of the policy in terms of reducing workplace AOD impairment, quantitative methods in the form of a non-randomised single-group pre- and post-test design were used. Changes from Time 1 (pre) to Time 2 (post) in the risk factors for workplace AOD impairment, and changes in the behaviour of interest—(self-reported) workplace AOD impairment—were measured. An integration of the findings from the process, impact, and outcome evaluation studies was undertaken using a combination of qualitative and quantitative methods.
RESULTS
For the process evaluation study: Study respondents indicated that their policy was developed in the context of comparable industries across Australia developing workplace AOD policies, and that this was mainly out of concern for the deleterious health and safety impacts of workplace AOD impairment. Results from the process evaluation study also indicated that in developing and implementing the workplace AOD policy there were mainly 'winners', in terms of health and safety in the workplace. While there were some components of the development and implementation of the policy that were better done than others, and the process was expensive and took a long time, there were, overall, few unanticipated consequences to implementing the policy, and it was reported to be thorough and of a high standard. Findings also indicated that overall the policy was developed and implemented according to best practice, in that: consultation during the policy development phase (with all the main stakeholders) was extensive; the policy was comprehensive; there was universal application of the policy to all employees; changes in the workplace (with regard to the policy) were gradual; and the policy was publicised appropriately. Furthermore, study participants' responses indicated that the role of an independent external expert, who was trusted by all stakeholders, was integral to the success of the policy.
For the impact and outcome evaluation studies: Notwithstanding the limitations of pre- and post-test study designs with regard to attributing cause to the intervention, the findings from the impact evaluation study indicated that following policy implementation, statistically significant positive changes with regard to workplace AOD impairment were recorded for the following variables (risk factors for workplace AOD impairment): Knowledge; Attitudes; Perceived Behavioural Control; Perceptions of the Certainty of being punished for coming to work impaired by AODs; Perceptions of the Swiftness of punishment for coming to work impaired by AODs; and Direct and Indirect Experience with Punishment Avoidance for workplace AOD impairment. There were, however, no statistically significant positive changes following policy implementation for Behavioural Intentions, Subjective Norms, and Perceptions of the Severity of punishment for workplace AOD impairment. With regard to the outcome evaluation, there was a statistically significant reduction in self-reported workplace AOD impairment following the implementation of the policy. As with the impact evaluation, these findings need to be interpreted in light of the limitations of the study design in being able to attribute cause to the intervention alone. The findings from the outcome evaluation study also showed that while a positive change in self-reported workplace AOD impairment following implementation of the policy did not appear to be related to gender, age group, or employment type, it did appear to be related to levels of employee general alcohol use, cannabis use, site type, and employment role.
Integration of the process, impact, and outcome evaluation studies: There appeared to be qualitative support for the relationship between the process of developing and implementing the policy and the impact of the policy in changing the risk factors for workplace AOD impairment. That is, overall the workplace AOD policy was developed and implemented well and, following its implementation, there were positive changes in the majority of measured risk factors for workplace AOD impairment. Quantitative findings lend further support for a relationship between the process and impact of the policy, in that there was a statistically significant association between employee-perceived fidelity of the policy (related to the process of the policy) and positive changes in some risk factors for workplace AOD impairment (representing the impact of the policy). Findings also indicated support for the relationship between the impact of the policy in changing the risk factors for workplace AOD impairment and the outcome of the policy in reducing workplace AOD impairment: positive changes in the risk factors for workplace AOD impairment (impact) were related to positive changes in self-reported workplace AOD impairment (representing the main goal and outcome of the policy).
CONCLUSIONS The findings from the research indicate support for the conclusion that the policy was appropriately implemented and that it achieved its objectives and main goal. The doctoral research findings also addressed a number of gaps in the literature on workplace AOD impairment, namely: the likely effectiveness of AOD policies for reducing AOD impairment in the workplace, which factors in the development and implementation of a workplace AOD policy are likely to facilitate or impede the effectiveness of the policy to reduce workplace AOD impairment, and which employee groups are less likely to respond well to policies of this type. The findings from this research not only represent an example of translational, applied research—through the evaluation of the study industry's policy—but also add to the body of knowledge on workplace AOD policies and provide policy-makers with evidence which may be useful in the development and implementation of effective workplace AOD policies. Importantly, the findings underscore the importance of scientific evidence in the development, implementation, and evaluation of workplace AOD policies.
Abstract:
Head motion (HM) is a well-known confound in analyses of functional MRI (fMRI) data. Neuroimaging researchers therefore typically treat HM as a nuisance covariate in their analyses. Even so, it is possible that HM shares a common genetic influence with the trait of interest. Here we investigate the extent to which this relationship is due to shared genetic factors, using HM extracted from resting-state fMRI and maternal- and self-report measures of Inattention and Hyperactivity-Impulsivity from the Strengths and Weaknesses of ADHD Symptoms and Normal Behaviour (SWAN) scales. Our sample consisted of healthy young adult twins (N = 627, 63% female, including 95 MZ and 144 DZ twin pairs, mean age 22, who had mother-reported SWAN; N = 725, 58% female, including 101 MZ and 156 DZ pairs, mean age 25, with self-reported SWAN). This design enabled us to distinguish genetic from environmental factors in the association between head movement and ADHD scales. HM was moderately correlated with maternal reports of Inattention (r = 0.17, p-value = 7.4E-5) and Hyperactivity-Impulsivity (r = 0.16, p-value = 2.9E-4), and these associations were mainly due to pleiotropic genetic factors, with genetic correlations [95% CIs] of rg = 0.24 [0.02, 0.43] and rg = 0.23 [0.07, 0.39]. Correlations between self-reports and HM were not significant, due largely to increased measurement error. These results indicate that treating HM as a nuisance covariate in neuroimaging studies of ADHD will likely reduce power to detect between-group effects, as the implicit assumption of independence between HM and Inattention or Hyperactivity-Impulsivity is not warranted. The implications of this finding are problematic for fMRI studies of ADHD, as failing to apply HM correction is known to increase the likelihood of false positives. We discuss two ways to circumvent this problem: censoring the motion-contaminated frames of the RS-fMRI scan, or explicitly modeling the relationship between HM and Inattention or Hyperactivity-Impulsivity.
Abstract:
Background and Aims: Considerable variation has been documented in fleet safety interventions' ability to create lasting behavioural change, and research has neglected to consider employees' perceptions regarding the effectiveness of fleet interventions. This is a critical oversight, as employees' beliefs and acceptance levels (as well as the perceived organisational commitment to safety) can ultimately influence levels of effectiveness, and this study aimed to examine such perceptions in Australian fleet settings. Method: 679 employees sourced from four Australian organisations completed a safety climate questionnaire and provided perspectives on the effectiveness of 35 different safety initiatives. Results: Countermeasures that were perceived as most effective were a mix of human and engineering-based approaches:
- (a) purchasing safer vehicles;
- (b) investigating serious vehicle incidents; and
- (c) practical driver skills training.
In contrast, the least effective countermeasures were considered to be:
- (a) signing a promise card;
- (b) advertising a company's phone number on the back of cars for complaints and compliments; and
- (c) communicating cost benefits of road safety to employees.
No significant differences in employee perceptions were identified based on age, gender, employees' self-reported crash involvement or employees' self-reported traffic infringement history. Perceptions of safety climate were found to be “moderate” but were not linked to self-reported crash or traffic infringement history. However, higher levels of safety climate were positively correlated with perceived effectiveness of some interventions. Conclusion: Taken together, employees believed occupational road safety risks could best be managed by the employer implementing a combination of engineering and human resource initiatives to enhance road safety. This paper further outlines the key findings with regard to practice and provides direction for future research.
Abstract:
- Objective: The purpose of this research was to explore which demographic and health status variables moderated the relationship between psychological distress and three nutrition indicators: the consumption of fruits, vegetables and takeaway.
- Method: We analysed data from the 2009 Self-Reported Health Status Survey Report collected in the state of Queensland, Australia. Adults (N = 6881) reported several demographic and health status variables. Moderated logistic regression models were estimated separately for the three nutrition indicators, testing as moderators demographic (age, gender, educational attainment, household income, remoteness, and area-level socioeconomic status) and health status indicators (body mass index, high cholesterol, high blood pressure, and diabetes status).
- Results: Several significant interactions emerged between psychological distress, demographic (age, area-level socioeconomic status, and income level), and health status variables (body mass index, diabetes status) in predicting the nutrition indicators. Relationships between distress and the nutrition indicators were not significantly different by gender, remoteness, educational attainment, high cholesterol status, and high blood pressure status.
- Conclusions: The associations between psychological distress and several nutrition indicators differ amongst population subgroups. These findings suggest that in distressed adults, age, area-level socio-economic status, income level, body mass index, and diabetes status may serve as protective or risk factors through increasing or decreasing the likelihood of meeting nutritional guidelines. Public health interventions for improving dietary behaviours and nutrition may be more effective if they take into account the moderators identified in this study rather than using global interventions.
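As a rough sketch of the moderation analysis described in the Method above, the example below fits a logistic regression with a distress-by-age-group interaction term on synthetic data; all variable names and values are illustrative, not drawn from the survey.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic data standing in for survey responses: a continuous distress score,
    # a binary older-age-group indicator, and whether a fruit guideline was met.
    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "distress": rng.normal(0, 1, n),
        "older": rng.integers(0, 2, n),
    })
    # Build in a moderation effect: distress lowers the odds of meeting the
    # guideline, but less so in the older group.
    logit_p = -0.2 - 0.6 * df["distress"] + 0.4 * df["older"] * df["distress"]
    df["met_guideline"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    # Moderated logistic regression: the interaction term tests whether the
    # distress-nutrition association differs between age groups.
    model = smf.logit("met_guideline ~ distress * older", data=df).fit(disp=False)
    print(model.summary())
    print("Interaction odds ratio:", np.exp(model.params["distress:older"]))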
Abstract:
Regional studies globally have a strong focus on understanding the causes of variation in the economic performance and wellbeing of regions, and this emphasis acknowledges that the strength of the local or regional economy plays a determinant role in shaping quality of life. Regional research has been less active in considering spatial variation in other factors that are critical to individual and societal wellbeing. For example, the regional studies community has been absent from the debate on the social determinants of health and how these influences vary spatially. This paper considers the results of a cross-sectional survey of Australians aged 65 and over that focussed on social connections and wellbeing. It examines regional variations in the incidence of social isolation within the older population. It finds that while the incidence of self-reported social isolation amongst older persons is broadly consistent with earlier studies, it demonstrates an unexpected spatial patterning. The paper considers community-building activities in addressing the impacts of social isolation, including the role of urban design, and suggests that there is a need to supplement the national overview presented here through more detailed studies focussed on individual localities.
Abstract:
Background: Best-practice clinical health care is widely recognised to be founded on evidence-based practice. Enhancing evidence-based practice via the rapid translation of new evidence into everyday clinical practice is fundamental to the success of health care and, in turn, the health care professions. Little is known about the collective research capacity and culture of the podiatry profession across Australia. Thus, the aim of this study was to investigate the research capacity and culture of the podiatry profession within Australia and determine whether there were any differences between podiatrists working in different health sectors and workplaces. Method: All registered podiatrists were eligible to participate in a cross-sectional online survey. The Australian Podiatry Associations disseminated the survey and all podiatrists were encouraged to distribute it to colleagues. The Research Capacity and Culture (RCC) tool was used to collect all research capacity and culture item variables using a 10-point scale (1 = lowest; 10 = highest). Additional demographic, workplace and health sector data variables were also collected. Mann-Whitney U, Kruskal-Wallis and logistic regression analyses were used to determine any differences between health sectors and workplaces. Word cloud analysis was used for qualitative responses on individual motivators of and barriers to research culture. Results: There were 232 fully completed surveys (6% of Australian registered podiatrists). Overall, respondents reported low success or skill (median rating < 4) on the majority of individual success or skill items. Podiatrists working in multi-practitioner workplaces reported higher individual success or skill on the majority of items compared with sole practitioners (p < 0.05). Non-clinical and public health sector podiatrists reported significantly higher post-graduate study enrolment or completion, research activity participation, provisions to undertake research, and individual success or skill than those working privately. Conclusions: This study suggests that podiatrists in Australia report similarly low levels of research success or skill to those reported in other allied health professions. The workplace setting and health sector seem to play key roles in self-reported research success and skills. This is important knowledge for podiatrists and researchers aiming to translate research evidence into clinical practice.
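To illustrate the kind of between-group comparison described above (two independent groups rated on a 10-point scale), here is a minimal sketch using SciPy's Mann-Whitney U test; the ratings are invented placeholders, not survey data.

    from scipy.stats import mannwhitneyu

    # Hypothetical RCC-style ratings (1-10) for sole vs multi-practitioner workplaces.
    sole = [2, 3, 4, 2, 5, 3, 4, 3]
    multi = [5, 6, 4, 7, 5, 6, 8, 5]

    # Two-sided Mann-Whitney U test for a difference in ratings between the groups.
    stat, p_value = mannwhitneyu(sole, multi, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.4f}")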
Abstract:
This study is part of the Mood Disorders Project conducted by the Department of Mental Health and Alcohol Research, National Public Health Institute, and consists of a general population survey sample and a major depressive disorder (MDD) patient cohort from the Vantaa Depression Study (VDS). The general population survey study was conducted in 2003 in the cities of Espoo and Vantaa. The VDS is a collaborative depression research project between the Department of Mental Health and Alcohol Research of the National Public Health Institute and the Department of Psychiatry of the Peijas Medical Care District (PMCD), beginning in 1997. It is a prospective, naturalistic cohort study of 269 secondary-level care psychiatric out- and inpatients with a new episode of Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) MDD. In the general population survey study, a total of 900 participants (300 from Espoo, 600 from Vantaa) aged 20-70 years were randomly drawn from the Population Register Centre in Finland. A self-report booklet, including the Eysenck Personality Inventory (EPI), the Temperament and Character Inventory Revised (TCI-R), the Beck Depression Inventory and the Beck Anxiety Inventory, was mailed to all subjects. Altogether 441 participants responded (94 returned only the shortened version without the TCI-R) and gave their informed consent. The VDS involved screening all patients aged 20-60 years (n = 806) in the PMCD for a possible new episode of DSM-IV MDD. 542 consenting patients were interviewed with a semi-structured interview (the WHO Schedules for Clinical Assessment in Neuropsychiatry, version 2.0). 269 patients with a current DSM-IV MDD were included in the study and further interviewed with semi-structured interviews to assess all other axis I and II psychiatric diagnoses. Exclusion criteria were DSM-IV bipolar I and II disorders, schizoaffective disorder, schizophrenia or another psychosis, and organic and substance-induced mood disorders. The present study includes those 193 individuals (139 females, 54 males) who could be followed up at both 6 and 18 months and whose depression had remained unipolar. Personality was investigated with the EPI. Personality dimensions were associated not only with the symptoms of depression but also with the symptoms of anxiety, both in the general population and in depressive patients, as well as with comorbid disorders in MDD patients, supporting the dimensional view of depression and anxiety. In the general population, high Harm Avoidance and low Self-Directedness were moderately associated, and low extraversion and high neuroticism strongly associated, with depressive and anxiety symptoms. The personality dimensions, especially high Harm Avoidance, low Self-Directedness and high neuroticism, were also somewhat predictive of self-reported use of health care services for psychiatric reasons and of lifetime mental disorder. Moreover, high Harm Avoidance was associated with a family history of mental disorder. In depressive patients, neuroticism scores were found to decline markedly and extraversion scores to increase somewhat with recovery. The predictive value of the changes in symptoms of depression and anxiety in explaining follow-up neuroticism was about one-third of that of baseline neuroticism. In contrast to neuroticism, the scores of extraversion showed no dependence on the symptoms of anxiety, and the change in the symptoms of depression explained only about one-twentieth of the follow-up extraversion compared with baseline extraversion. No evidence was found of a scar effect during the one-year follow-up period. Finally, even after controlling for symptoms of both depression and anxiety, depressive patients had a somewhat higher level of neuroticism (odds ratio 1.11, p = 0.001) and a slightly lower level of extraversion (odds ratio 0.92, p = 0.003) than subjects in the general population. Among MDD patients, a positive dose-exposure relationship appeared to exist between neuroticism and the prevalence and number of comorbid axis I and II disorders. A negative relationship existed between the level of extraversion and the prevalence of comorbid social phobia and cluster C personality disorders. Personality dimensions are associated with the symptoms of depression and anxiety. Furthermore, these findings support the hypothesis that high neuroticism and somewhat low extraversion might be vulnerability factors for MDD, and that high neuroticism and low extraversion predispose to comorbid axis I and II disorders among patients with MDD.
Abstract:
The purpose of this study was to estimate the prevalence and distribution of reduced visual acuity, major chronic eye diseases, and the subsequent need for eye care services in the Finnish adult population comprising persons aged 30 years and older. In addition, we analyzed the effect of decreased vision on functioning and need for assistance using the World Health Organization’s (WHO) International Classification of Functioning, Disability, and Health (ICF) as a framework. The study was based on the Health 2000 health examination survey, a nationally representative population-based comprehensive survey of health and functional capacity carried out in 2000 to 2001 in Finland. The study sample representing the Finnish population aged 30 years and older was drawn by two-stage stratified cluster sampling. The Health 2000 survey included a home interview and a comprehensive health examination conducted at a nearby screening center. If the invited participants did not attend, an abridged examination was conducted at home or in an institution. Based on our findings in participants, the great majority (96%) of Finnish adults had at least moderate visual acuity (VA ≥ 0.5) with current refraction correction, if any. However, in the age group 75–84 years the prevalence decreased to 81%, and after 85 years to 46%. In the population aged 30 years and older, the prevalence of habitual visual impairment (VA ≤ 0.25) was 1.6%, and 0.5% were blind (VA < 0.1). The prevalence of visual impairment increased significantly with age (p < 0.001), and after the age of 65 years the increase was sharp. Visual impairment was equally common in both sexes (OR 1.20, 95% CI 0.82 – 1.74). Based on self-reported and/or register-based data, the estimated total prevalences of cataract, glaucoma, age-related maculopathy (ARM), and diabetic retinopathy (DR) in the study population were 10%, 5%, 4%, and 1%, respectively. The prevalence of all of these chronic eye diseases increased with age (p < 0.001). Cataract and glaucoma were more common in women than in men (OR 1.55, 95% CI 1.26 – 1.91 and OR 1.57, 95% CI 1.24 – 1.98, respectively). The most prevalent eye diseases in people with visual impairment (VA ≤ 0.25) were ARM (37%), unoperated cataract (27%), glaucoma (22%), and DR (7%). More than half (58%) of visually impaired people had had a vision examination during the past five years, and 79% had received some vision rehabilitation services, mainly in the form of spectacles (70%). Only one-third (31%) had received formal low vision rehabilitation (i.e., fitting of low vision aids, receiving patient education, training for orientation and mobility, training for activities of daily living (ADL), or consultation with a social worker). People with low vision (VA 0.1 – 0.25) were less likely to have received formal low vision rehabilitation, magnifying glasses, or other low vision aids than blind people (VA < 0.1). Furthermore, low cognitive capacity and living in an institution were associated with limited use of vision rehabilitation services. Of the visually impaired living in the community, 71% reported a need for assistance and 24% had an unmet need for assistance in everyday activities. The prevalence of limitations in ADL, instrumental activities of daily living (IADL), and mobility increased with decreasing VA (p < 0.001).
Visually impaired persons (VA ≤ 0.25) were four times more likely to have ADL disabilities than those with good VA (VA ≥ 0.8) after adjustment for sociodemographic and behavioral factors and chronic conditions (OR 4.36, 95% CI 2.44 – 7.78). Limitations in IADL and measured mobility were five times as likely (OR 4.82, 95% CI 2.38 – 9.76 and OR 5.37, 95% CI 2.44 – 7.78, respectively) and self-reported mobility limitations were three times as likely (OR 3.07, 95% CI 1.67 – 9.63) as in persons with good VA. The high prevalence of age-related eye diseases and subsequent visual impairment in the fastest growing segment of the population will result in a substantial increase in the demand for eye care services in the future. Many of the visually impaired, especially older persons with decreased cognitive capacity or living in an institution, have not had a recent vision examination and lack adequate low vision rehabilitation. This highlights the need for regular evaluation of visual function in the elderly and an active dissemination of information about rehabilitation services. Decreased VA is strongly associated with functional limitations, and even a slight decrease in VA was found to be associated with limited functioning. Thus, continuous efforts are needed to identify and treat eye diseases to maintain patients’ quality of life and to alleviate the social and economic burden of serious eye diseases.
Abstract:
The purpose of this study was to evaluate subjective food-related gastrointestinal symptoms and their relation to cow’s milk by determining the genotype of adult-type hypolactasia, measuring antibodies against milk protein, and screening for the most common cause of secondary hypolactasia, namely coeliac disease. The whole study group comprised 1900 adults who gave a blood sample for the study when they attended a health care centre laboratory for various reasons. Of these, 1885 (99%) completed a questionnaire on food-related gastrointestinal symptoms. Study No. I evaluated the prevalence of adult-type hypolactasia and its correlation with self-reported milk-induced gastrointestinal symptoms. The testing for hypolactasia was done by determination of the C/T-13910 genotypes of the study subjects. The results show that patients with the C/C-13910 genotype associated with adult-type hypolactasia consume less milk than those with the C/T-13910 and T/T-13910 genotypes. Study No. II evaluated the prevalence and clinical characteristics of undiagnosed coeliac disease in the whole study population with transglutaminase and endomysium antibodies and their correlation with gastrointestinal symptoms. The prevalence of coeliac disease was 2%, which is surprisingly high. Serum transglutaminase and endomysium antibodies are valuable tools for recognising undiagnosed coeliac disease in outpatient clinics. In Study No. III, the evaluation of milk protein IgE-related hypersensitivity was carried out by stratifying all 756 study subjects with milk-related problems and randomly choosing 100 age- and sex-matched controls with no such symptoms from the rest of the original study group. In Study No. IV, 400 serum samples were randomly selected for analysing milk protein-related IgA and IgG antibodies and their correlation with milk-related GI symptoms. The measurement of milk protein IgA, IgE or IgG (Studies No. III and IV) did not correlate clearly with milk-induced symptoms and gave no clinically significant information; hence their measurement is not encouraged in outpatient clinics. In conclusion, adult-type hypolactasia is often considered the reason for gastrointestinal symptoms in adults, and determination of the C/T-13910 genotypes is a practical way of diagnosing adult-type hypolactasia in an outpatient setting. Undiagnosed coeliac disease should be actively screened for and diagnosed in order to apply a gluten-free diet and avoid GI symptoms and nutritional deficiencies. Cow’s milk hypersensitivity in the adult population is difficult to diagnose since the mechanism by which it is mediated is still unclear. Measurement of cow’s milk protein-specific IgE, IgA or IgG antibodies does not correlate with subjective milk-related GI symptoms.
Abstract:
This paper explores the obstacles associated with designing video game levels for the purpose of objectively measuring flow. We sought to create three video game levels capable of inducing a flow state, an overload state (low-flow), and a boredom state (low-flow). A pilot study, in which participants self-reported levels of flow after playing all three game levels, was undertaken. Unexpected results point to the challenges of operationalising flow in video game research, obstacles in experimental design for invoking flow and low-flow, concerns about flow as a construct for measuring video game enjoyment, the applicability of self-report flow scales, and the experience of flow in video game play despite substantial challenge-skill differences.