Abstract:
BACKGROUND The current impetus for developing alcohol and/or other drugs (AODs) workplace policies in Australia is to reduce workplace AOD impairment, improve safety, and prevent AOD-related injury in the workplace. For these policies to be effective, they need to be informed by scientific evidence, yet evidence to inform the development and implementation of effective workplace AOD policies is currently lacking. There does not currently appear to be conclusive evidence for the effectiveness of workplace AOD policies in reducing impairment and preventing AOD-related injury. There is also no apparent evidence regarding which factors facilitate or impede the success of an AOD policy, or whether, for example, unsuccessful policy outcomes were due to poor policy or merely poor implementation of the policy. The aim of this research was to undertake a process, impact, and outcome evaluation of a workplace AOD policy, and to contribute to the body of knowledge on the development and implementation of effective workplace AOD policies. METHODS The research setting was a state-based power-generating industry in Australia between May 2008 and May 2010. Participants for the process evaluation study were individuals who were integral to the development and/or implementation of the workplace AOD policy (key informants), and comprised the majority of individuals who were involved in those processes. The sample represented the two main groups of interest—management and union delegates/employee representatives—from all three of the participating organisations. For the impact and outcome evaluation studies, the population included all employees from the three participating organisations, and participants were all employees who consented to participate in the study and who completed both the pre- and post-policy implementation questionnaires.
Qualitative methods in the form of interviews with key stakeholders were used to evaluate the process of developing and implementing the workplace AOD policy. To evaluate the impact of the policy on the risk factors for workplace AOD impairment, and the outcome of the policy in terms of reducing workplace AOD impairment, quantitative methods in the form of a non-randomised single-group pre- and post-test design were used. Changes from Time 1 (pre) to Time 2 (post) in the risk factors for workplace AOD impairment, and changes in the behaviour of interest—(self-reported) workplace AOD impairment—were measured. An integration of the findings from the process, impact, and outcome evaluation studies was undertaken using a combination of qualitative and quantitative methods. RESULTS For the process evaluation study, respondents indicated that their policy was developed in the context of comparable industries across Australia developing workplace AOD policies, and that this was mainly out of concern for the deleterious health and safety impacts of workplace AOD impairment. Results from the process evaluation study also indicated that in developing and implementing the workplace AOD policy there were mainly 'winners', in terms of health and safety in the workplace. While some components of the development and implementation of the policy were better done than others, and the process was expensive and took a long time, there were, overall, few unanticipated consequences of implementing the policy, and it was reported to be thorough and of a high standard.
Findings also indicated that overall the policy was developed and implemented according to best practice, in that: consultation during the policy development phase (with all the main stakeholders) was extensive; the policy was comprehensive; the policy was applied universally to all employees; changes in the workplace (with regard to the policy) were gradual; and the policy was publicised appropriately. Furthermore, study participants' responses indicated that the role of an independent external expert, who was trusted by all stakeholders, was integral to the success of the policy. For the impact and outcome evaluation studies, notwithstanding the limitations of pre- and post-test study designs with regard to attributing cause to the intervention, the findings from the impact evaluation study indicated that following policy implementation, statistically significant positive changes with regard to workplace AOD impairment were recorded for the following variables (risk factors for workplace AOD impairment): Knowledge; Attitudes; Perceived Behavioural Control; Perceptions of the Certainty of being punished for coming to work impaired by AODs; Perceptions of the Swiftness of punishment for coming to work impaired by AODs; and Direct and Indirect Experience with Punishment Avoidance for workplace AOD impairment. There were, however, no statistically significant positive changes following policy implementation for Behavioural Intentions, Subjective Norms, or Perceptions of the Severity of punishment for workplace AOD impairment. With regard to the outcome evaluation, there was a statistically significant reduction in self-reported workplace AOD impairment following the implementation of the policy. As with the impact evaluation, these findings need to be interpreted in light of the limitations of the study design in attributing cause to the intervention alone.
The findings from the outcome evaluation study also showed that while a positive change in self-reported workplace AOD impairment following implementation of the policy did not appear to be related to gender, age group, or employment type, it did appear to be related to levels of employee general alcohol use, cannabis use, site type, and employment role. Regarding integration of the process, impact, and outcome evaluation studies, there appeared to be qualitative support for the relationship between the process of developing and implementing the policy and the impact of the policy in changing the risk factors for workplace AOD impairment. That is, overall the workplace AOD policy was developed and implemented well and, following its implementation, there were positive changes in the majority of measured risk factors for workplace AOD impairment. Quantitative findings lend further support to a relationship between the process and impact of the policy, in that there was a statistically significant association between employee-perceived fidelity of the policy (related to the process of the policy) and positive changes in some risk factors for workplace AOD impairment (representing the impact of the policy). Findings also indicated support for the relationship between the impact of the policy in changing the risk factors for workplace AOD impairment and the outcome of the policy in reducing workplace AOD impairment: positive changes in the risk factors for workplace AOD impairment (impact) were related to positive changes in self-reported workplace AOD impairment (representing the main goal and outcome of the policy). CONCLUSIONS The findings from the research support the conclusion that the policy was appropriately implemented and that it achieved its objectives and main goal.
The doctoral research findings also addressed a number of gaps in the literature on workplace AOD impairment, namely: the likely effectiveness of AOD policies for reducing AOD impairment in the workplace; which factors in the development and implementation of a workplace AOD policy are likely to facilitate or impede the effectiveness of the policy in reducing workplace AOD impairment; and which employee groups are less likely to respond well to policies of this type. The findings from this research not only represent an example of translational, applied research—through the evaluation of the study industry's policy—but also add to the body of knowledge on workplace AOD policies and provide policy-makers with evidence which may be useful in the development and implementation of effective workplace AOD policies. Importantly, the findings underscore the importance of scientific evidence in the development, implementation, and evaluation of workplace AOD policies.
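The single-group pre/post comparison of self-reported impairment described above can be sketched with a paired-data test. The following is a hypothetical illustration only: the thesis does not state which test it used, and the discordant-pair counts below are invented for the example, not the study's data.

```python
# Hedged sketch: McNemar's test is one standard choice for a paired binary
# outcome (impaired at Time 1 yes/no vs. impaired at Time 2 yes/no).
# The counts are illustrative, NOT the thesis's data.

def mcnemar(b: int, c: int) -> float:
    """McNemar chi-square from discordant-pair counts:
    b = impaired pre but not post, c = not impaired pre but impaired post."""
    return (b - c) ** 2 / (b + c)

# Illustrative discordant pairs: 40 employees moved impaired -> not impaired,
# 12 moved the other way.
chi2 = mcnemar(40, 12)
# Compare against the 1-df chi-square critical value at alpha = 0.05 (3.84).
significant = chi2 > 3.84
print(round(chi2, 2), significant)
```

A value above 3.84 would be consistent with the statistically significant pre-to-post reduction the outcome evaluation reports, though the design still cannot attribute the change to the policy alone.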
Abstract:
Head motion (HM) is a well-known confound in analyses of functional MRI (fMRI) data. Neuroimaging researchers therefore typically treat HM as a nuisance covariate in their analyses. Even so, it is possible that HM shares a common genetic influence with the trait of interest. Here we investigate the extent to which this relationship is due to shared genetic factors, using HM extracted from resting-state fMRI and maternal and self-report measures of Inattention and Hyperactivity-Impulsivity from the Strengths and Weaknesses of ADHD Symptoms and Normal Behaviour (SWAN) scales. Our sample consisted of healthy young adult twins (N = 627 (63% females), including 95 MZ and 144 DZ twin pairs, mean age 22, who had mother-reported SWAN; N = 725 (58% females), including 101 MZ and 156 DZ pairs, mean age 25, with self-reported SWAN). This design enabled us to distinguish genetic from environmental factors in the association between head movement and ADHD scales. HM was moderately correlated with maternal reports of Inattention (r = 0.17, p-value = 7.4E-5) and Hyperactivity-Impulsivity (r = 0.16, p-value = 2.9E-4), and these associations were mainly due to pleiotropic genetic factors, with genetic correlations [95% CIs] of rg = 0.24 [0.02, 0.43] and rg = 0.23 [0.07, 0.39]. Correlations between self-reports and HM were not significant, due largely to increased measurement error. These results indicate that treating HM as a nuisance covariate in neuroimaging studies of ADHD will likely reduce power to detect between-group effects, as the implicit assumption of independence between HM and Inattention or Hyperactivity-Impulsivity is not warranted. The implications of this finding are problematic for fMRI studies of ADHD, as failing to apply HM correction is known to increase the likelihood of false positives.
We discuss two ways to circumvent this problem: censoring the motion-contaminated frames of the RS-fMRI scan, or explicitly modeling the relationship between HM and Inattention or Hyperactivity-Impulsivity.
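The classical-twin-design logic behind the genetic correlations reported above can be sketched with Falconer-style approximations. The abstract's actual estimates come from formal biometric modelling, so the formulas and every input value below are simplified, illustrative assumptions, not the study's method or data.

```python
# Hedged sketch of twin-design intuition. Falconer's approximation:
# heritability h2 ~ 2 * (r_MZ - r_DZ); a genetic correlation can be roughly
# approximated the same way from cross-twin cross-trait correlations.
# All numeric inputs are illustrative, NOT the study's estimates.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Approximate narrow-sense heritability from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

def approx_genetic_correlation(ct_mz: float, ct_dz: float,
                               h2_a: float, h2_b: float) -> float:
    """Rough rg from cross-twin cross-trait correlations (ct_*), assuming
    additive genetic effects only (no dominance, no shared environment)."""
    genetic_cov = 2 * (ct_mz - ct_dz)
    return genetic_cov / (h2_a * h2_b) ** 0.5

h2_hm = falconer_h2(0.50, 0.28)      # head motion (illustrative correlations)
h2_inatt = falconer_h2(0.46, 0.25)   # inattention (illustrative correlations)
rg = approx_genetic_correlation(0.12, 0.06, h2_hm, h2_inatt)
print(round(h2_hm, 2), round(rg, 2))
```

A nonzero rg is exactly the condition under which regressing out HM also removes trait-related variance, which is the power loss the abstract warns about.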
Abstract:
Background and Aims Considerable variation has been documented in fleet safety interventions’ ability to create lasting behavioural change, and research has neglected to consider employees’ perceptions regarding the effectiveness of fleet interventions. This is a critical oversight, as employees’ beliefs and acceptance levels (as well as the perceived organisational commitment to safety) can ultimately influence levels of effectiveness, and this study aimed to examine such perceptions in Australian fleet settings. Method 679 employees sourced from four Australian organisations completed a safety climate questionnaire and provided perspectives on the effectiveness of 35 different safety initiatives. Results Countermeasures that were perceived as most effective were a mix of human and engineering-based approaches: (a) purchasing safer vehicles; (b) investigating serious vehicle incidents; and (c) practical driver skills training. In contrast, the countermeasures considered least effective were: (a) signing a promise card; (b) advertising a company’s phone number on the back of cars for complaints and compliments; and (c) communicating cost benefits of road safety to employees. No significant differences in employee perceptions were identified based on age, gender, employees’ self-reported crash involvement, or employees’ self-reported traffic infringement history. Perceptions of safety climate were identified to be “moderate” but were not linked to self-reported crash or traffic infringement history. However, higher levels of safety climate were positively correlated with perceived effectiveness of some interventions. Conclusion Taken together, employees believed occupational road safety risks could best be managed by the employer implementing a combination of engineering and human resource initiatives to enhance road safety.
This paper further outlines the key findings with regard to practice and provides direction for future research.
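The correlation between safety climate and perceived intervention effectiveness reported above can be sketched with a plain Pearson correlation. The ratings below are invented for illustration; the study's questionnaire data and effect sizes are not given in the abstract.

```python
# Hedged sketch of a Pearson correlation between safety-climate scores and
# perceived-effectiveness ratings. Values are illustrative, NOT survey data.
import math

def pearson_r(x: list, y: list) -> float:
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

climate = [3.1, 3.5, 2.8, 4.0, 3.7, 2.5]        # hypothetical climate scores
effectiveness = [3.0, 3.8, 2.9, 4.2, 3.5, 2.4]  # hypothetical ratings
print(round(pearson_r(climate, effectiveness), 2))
```

A positive r of this kind is what the abstract means by safety climate being "positively correlated with perceived effectiveness of some interventions".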
Abstract:
Objective The purpose of this research was to explore which demographic and health status variables moderated the relationship between psychological distress and three nutrition indicators: the consumption of fruits, vegetables, and takeaway. Method We analysed data from the 2009 Self-Reported Health Status Survey Report collected in the state of Queensland, Australia. Adults (N = 6881) reported several demographic and health status variables. Moderated logistic regression models were estimated separately for the three nutrition indicators, testing as moderators demographic (age, gender, educational attainment, household income, remoteness, and area-level socioeconomic status) and health status indicators (body mass index, high cholesterol, high blood pressure, and diabetes status). Results Several significant interactions emerged between psychological distress, demographic (age, area-level socioeconomic status, and income level), and health status variables (body mass index, diabetes status) in predicting the nutrition indicators. Relationships between distress and the nutrition indicators did not differ significantly by gender, remoteness, educational attainment, high cholesterol status, or high blood pressure status. Conclusions The associations between psychological distress and several nutrition indicators differ amongst population subgroups. These findings suggest that in distressed adults, age, area-level socio-economic status, income level, body mass index, and diabetes status may serve as protective or risk factors by increasing or decreasing the likelihood of meeting nutritional guidelines. Public health interventions for improving dietary behaviours and nutrition may be more effective if they take into account the moderators identified in this study rather than using global interventions.
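The moderated (interaction) logistic regression described above can be sketched by showing how an interaction term makes the distress effect depend on a moderator. Every coefficient below is hypothetical, chosen only to illustrate the mechanics; the study's fitted models and coefficients are not reported in the abstract.

```python
# Hedged sketch of a distress x diabetes interaction in a logit model.
# All coefficients are hypothetical, NOT the study's estimates.
import math

def predict_prob(distress: float, diabetes: int, b0: float = -0.5,
                 b_d: float = -0.30, b_diab: float = -0.20,
                 b_interact: float = -0.25) -> float:
    """P(meets a nutrition guideline) under a logistic model with a
    distress-by-diabetes interaction term."""
    logit = (b0 + b_d * distress + b_diab * diabetes
             + b_interact * distress * diabetes)
    return 1 / (1 + math.exp(-logit))

# The distress 'slope' on the log-odds scale depends on diabetes status:
slope_no_diab = -0.30          # b_d alone
slope_diab = -0.30 + -0.25     # b_d + b_interact: moderation in action
print(round(predict_prob(2.0, 0), 3), round(predict_prob(2.0, 1), 3))
```

With these (hypothetical) signs, distress lowers the probability of meeting the guideline more steeply for adults with diabetes, which is the kind of subgroup difference the abstract's interactions describe.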
Abstract:
Regional studies globally have a strong focus on understanding the causes of variation in the economic performance and wellbeing of regions, an emphasis that acknowledges that the strength of the local or regional economy plays a determinant role in shaping quality of life. Regional research has been less active in considering spatial variation in other factors that are critical to individual and societal wellbeing. For example, the regional studies community has been absent from the debate on the social determinants of health and how these influences vary spatially. This paper considers the results of a cross-sectional survey of Australians aged 65 and over that focussed on social connections and wellbeing. It examines regional variations in the incidence of social isolation within the older population. It finds that while the incidence of self-reported social isolation amongst older persons is broadly consistent with earlier studies, it demonstrates an unexpected spatial patterning. The paper considers community-building activities in addressing the impacts of social isolation, including the role of urban design, and suggests that there is a need to supplement the national overview presented here through more detailed studies focussed on individual localities.
Abstract:
Background Best-practice clinical health care is widely recognised to be founded on evidence-based practice. Enhancing evidence-based practice via the rapid translation of new evidence into everyday clinical practice is fundamental to the success of health care and, in turn, the health care professions. Little is known about the collective research capacity and culture of the podiatry profession across Australia. Thus, the aim of this study was to investigate the research capacity and culture of the podiatry profession within Australia and determine whether there were any differences between podiatrists working in different health sectors and workplaces. Method All registered podiatrists were eligible to participate in a cross-sectional online survey. The Australian Podiatry Associations disseminated the survey, and all podiatrists were encouraged to distribute it to colleagues. The Research Capacity and Culture (RCC) tool was used to collect all research capacity and culture item variables using a 10-point scale (1 = lowest; 10 = highest). Additional demographic, workplace, and health sector data were also collected. Mann–Whitney U, Kruskal–Wallis, and logistic regression analyses were used to determine any differences between health sectors and workplaces. Word cloud analysis was used for qualitative responses on individual motivators of and barriers to research culture. Results There were 232 fully completed surveys (6% of Australian registered podiatrists). Overall, respondents reported low success or skill (median rating < 4) on the majority of individual success or skill items. Podiatrists working in multi-practitioner workplaces reported higher individual success or skill on the majority of items compared with sole practitioners (p < 0.05).
Non-clinical and public health sector podiatrists reported significantly higher post-graduate study enrolment or completion, research activity participation, provisions to undertake research, and individual success or skill than those working in private practice. Conclusions This study suggests that podiatrists in Australia report similarly low levels of research success or skill to those reported in other allied health professions. The workplace setting and health sector seem to play key roles in self-reported research success and skills. This is important knowledge for podiatrists and researchers aiming to translate research evidence into clinical practice.
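The Mann–Whitney U comparison used above for 10-point RCC ratings between workplace groups can be sketched directly, since the statistic is just a count over all cross-group pairs. The ratings below are invented for illustration, not survey responses.

```python
# Hedged sketch of the Mann-Whitney U statistic for ordinal 10-point ratings
# (e.g. multi-practitioner vs sole-practitioner workplaces).
# Data are illustrative, NOT the survey's responses.

def mann_whitney_u(x: list, y: list) -> float:
    """U statistic for sample x versus y; ties contribute 0.5 each."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1
            elif xi == yj:
                u += 0.5
    return u

sole = [2, 3, 3, 4, 2]    # hypothetical sole-practitioner ratings
multi = [4, 5, 6, 5, 7]   # hypothetical multi-practitioner ratings
u = mann_whitney_u(multi, sole)
# U close to len(x) * len(y) means 'multi' ratings are systematically higher.
print(u, len(multi) * len(sole))
```

In practice one would convert U to a p-value via its null distribution or a normal approximation; the sketch stops at the statistic itself.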
Abstract:
This study is part of the Mood Disorders Project conducted by the Department of Mental Health and Alcohol Research, National Public Health Institute, and consists of a general population survey sample and a major depressive disorder (MDD) patient cohort from the Vantaa Depression Study (VDS). The general population survey was conducted in 2003 in the cities of Espoo and Vantaa. The VDS is a collaborative depression research project between the Department of Mental Health and Alcohol Research of the National Public Health Institute and the Department of Psychiatry of the Peijas Medical Care District (PMCD), beginning in 1997. It is a prospective, naturalistic cohort study of 269 secondary-level care psychiatric out- and inpatients with a new episode of Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) MDD. In the general population survey, a total of 900 participants (300 from Espoo, 600 from Vantaa) aged 20–70 years were randomly drawn from the Population Register Centre in Finland. A self-report booklet, including the Eysenck Personality Inventory (EPI), the Temperament and Character Inventory–Revised (TCI-R), the Beck Depression Inventory, and the Beck Anxiety Inventory, was mailed to all subjects. Altogether 441 participants responded (94 returned only the shortened version without the TCI-R) and gave their informed consent. VDS involved screening all patients aged 20–60 years (n = 806) in the PMCD for a possible new episode of DSM-IV MDD. 542 consenting patients were interviewed with a semi-structured interview (the WHO Schedules for Clinical Assessment in Neuropsychiatry, version 2.0). 269 patients with a current DSM-IV MDD were included in the study and further interviewed with semi-structured interviews to assess all other axis I and II psychiatric diagnoses. Exclusion criteria were DSM-IV bipolar I and II disorders, schizoaffective disorder, schizophrenia or another psychosis, and organic and substance-induced mood disorders.
The present study includes those 193 individuals (139 females, 54 males) who could be followed up at both 6 and 18 months and whose depression had remained unipolar. Personality was investigated with the EPI. Personality dimensions were associated not only with the symptoms of depression, but also with the symptoms of anxiety, both in the general population and in depressive patients, as well as with comorbid disorders in MDD patients, supporting the dimensional view of depression and anxiety. In the general population, high Harm Avoidance and low Self-Directedness were moderately associated, and low extraversion and high neuroticism strongly associated, with depressive and anxiety symptoms. The personality dimensions, especially high Harm Avoidance, low Self-Directedness, and high neuroticism, were also somewhat predictive of self-reported use of health care services for psychiatric reasons and of lifetime mental disorder. Moreover, high Harm Avoidance was associated with a family history of mental disorder. In depressive patients, neuroticism scores were found to decline markedly, and extraversion scores to increase somewhat, with recovery. The predictive value of the changes in symptoms of depression and anxiety in explaining follow-up neuroticism was about one-third of that of baseline neuroticism. In contrast to neuroticism, extraversion scores showed no dependence on the symptoms of anxiety, and the change in the symptoms of depression explained only one-twentieth of follow-up extraversion compared with baseline extraversion. No evidence was found of a scar effect during the one-year follow-up period. Finally, even after controlling for symptoms of both depression and anxiety, depressive patients had a somewhat higher level of neuroticism (odds ratio 1.11, p = 0.001) and a slightly lower level of extraversion (odds ratio 0.92, p = 0.003) than subjects in the general population.
Among MDD patients, a positive dose-exposure relationship appeared to exist between neuroticism and the prevalence and number of comorbid axis I and II disorders. A negative relationship existed between level of extraversion and prevalence of comorbid social phobia and cluster C personality disorders. Personality dimensions are associated with the symptoms of depression and anxiety. Furthermore, these findings support the hypothesis that high neuroticism and somewhat low extraversion might be vulnerability factors for MDD, and that high neuroticism and low extraversion predispose to comorbid axis I and II disorders among patients with MDD.
Abstract:
The purpose of this study was to estimate the prevalence and distribution of reduced visual acuity, major chronic eye diseases, and the subsequent need for eye care services in the Finnish adult population comprising persons aged 30 years and older. In addition, we analyzed the effect of decreased vision on functioning and need for assistance using the World Health Organization’s (WHO) International Classification of Functioning, Disability, and Health (ICF) as a framework. The study was based on the Health 2000 health examination survey, a nationally representative population-based comprehensive survey of health and functional capacity carried out in 2000 to 2001 in Finland. The study sample representing the Finnish population aged 30 years and older was drawn by two-stage stratified cluster sampling. The Health 2000 survey included a home interview and a comprehensive health examination conducted at a nearby screening center. If the invited participants did not attend, an abridged examination was conducted at home or in an institution. Based on our findings in participants, the great majority (96%) of Finnish adults had at least moderate visual acuity (VA ≥ 0.5) with current refraction correction, if any. However, in the age group 75–84 years the prevalence decreased to 81%, and after 85 years to 46%. In the population aged 30 years and older, the prevalence of habitual visual impairment (VA ≤ 0.25) was 1.6%, and 0.5% were blind (VA < 0.1). The prevalence of visual impairment increased significantly with age (p < 0.001), and after the age of 65 years the increase was sharp. Visual impairment was equally common for both sexes (OR 1.20, 95% CI 0.82–1.74). Based on self-reported and/or register-based data, the estimated total prevalences of cataract, glaucoma, age-related maculopathy (ARM), and diabetic retinopathy (DR) in the study population were 10%, 5%, 4%, and 1%, respectively. The prevalence of all of these chronic eye diseases increased with age (p < 0.001).
Cataract and glaucoma were more common in women than in men (OR 1.55, 95% CI 1.26–1.91 and OR 1.57, 95% CI 1.24–1.98, respectively). The most prevalent eye diseases in people with visual impairment (VA ≤ 0.25) were ARM (37%), unoperated cataract (27%), glaucoma (22%), and DR (7%). More than half (58%) of visually impaired people had had a vision examination during the past five years, and 79% had received some vision rehabilitation services, mainly in the form of spectacles (70%). Only one-third (31%) had received formal low vision rehabilitation (i.e., fitting of low vision aids, patient education, training for orientation and mobility, training for activities of daily living (ADL), or consultation with a social worker). People with low vision (VA 0.1–0.25) were less likely to have received formal low vision rehabilitation, magnifying glasses, or other low vision aids than blind people (VA < 0.1). Furthermore, low cognitive capacity and living in an institution were associated with limited use of vision rehabilitation services. Of the visually impaired living in the community, 71% reported a need for assistance and 24% had an unmet need for assistance in everyday activities. The prevalence of ADL, instrumental activities of daily living (IADL), and mobility limitations increased with decreasing VA (p < 0.001). Visually impaired persons (VA ≤ 0.25) were four times more likely to have ADL disabilities than those with good VA (VA ≥ 0.8) after adjustment for sociodemographic and behavioral factors and chronic conditions (OR 4.36, 95% CI 2.44–7.78). Limitations in IADL and measured mobility were five times as likely (OR 4.82, 95% CI 2.38–9.76 and OR 5.37, 95% CI 2.44–7.78, respectively), and self-reported mobility limitations were three times as likely (OR 3.07, 95% CI 1.67–9.63), as in persons with good VA.
The high prevalence of age-related eye diseases and subsequent visual impairment in the fastest growing segment of the population will result in a substantial increase in the demand for eye care services in the future. Many of the visually impaired, especially older persons with decreased cognitive capacity or living in an institution, have not had a recent vision examination and lack adequate low vision rehabilitation. This highlights the need for regular evaluation of visual function in the elderly and an active dissemination of information about rehabilitation services. Decreased VA is strongly associated with functional limitations, and even a slight decrease in VA was found to be associated with limited functioning. Thus, continuous efforts are needed to identify and treat eye diseases to maintain patients’ quality of life and to alleviate the social and economic burden of serious eye diseases.
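The odds-ratio-with-95%-CI figures quoted in this abstract (e.g. OR 4.36, 95% CI 2.44–7.78) can be sketched from a 2×2 table with the standard Wald interval on the log scale. The cell counts below are invented for illustration; the Health 2000 estimates were additionally adjusted for covariates, which this sketch omits.

```python
# Hedged sketch: unadjusted odds ratio and Wald 95% CI from a 2x2 table
# [[a, b], [c, d]] (rows: exposed/unexposed; cols: outcome yes/no).
# Counts are illustrative, NOT the Health 2000 data.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Return (OR, lower, upper) for the 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# exposed = visually impaired, outcome = ADL disability (illustrative counts)
or_, lo, hi = odds_ratio_ci(30, 20, 50, 150)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI that excludes 1, as here, corresponds to the statistically significant excess risk of functional limitation the abstract reports for visually impaired persons.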
Abstract:
The purpose of this study was to evaluate subjective food-related gastrointestinal symptoms and their relation to cow’s milk by determining the genotype of adult-type hypolactasia, measuring antibodies against milk protein, and screening for the most common cause of secondary hypolactasia, namely coeliac disease. The whole study group comprised 1900 adults who gave a blood sample for the study when they attended a health care centre laboratory for various reasons. Of these, 1885 (99%) completed a questionnaire on food-related gastrointestinal symptoms. Study No. I evaluated the prevalence of adult-type hypolactasia and its correlation with self-reported milk-induced gastrointestinal symptoms. The testing for hypolactasia was done by determining the C/T-13910 genotypes of the study subjects. The results show that patients with the C/C-13910 genotype associated with adult-type hypolactasia consume less milk than those with the C/T-13910 and T/T-13910 genotypes. Study No. II evaluated the prevalence and clinical characteristics of undiagnosed coeliac disease in the whole study population with transglutaminase and endomysium antibodies and their correlation with gastrointestinal symptoms. The prevalence of coeliac disease was 2%, which is surprisingly high. Serum transglutaminase and endomysium antibodies are valuable tools for recognising undiagnosed coeliac disease in outpatient clinics. In Study No. III, the evaluation of milk protein IgE-related hypersensitivity was carried out by stratifying all 756 study subjects with milk-related problems and randomly choosing 100 age- and sex-matched controls with no such symptoms from the rest of the original study group. In Study No. IV, 400 serum samples were randomly selected for analyzing milk protein-related IgA and IgG antibodies and their correlation with milk-related GI symptoms. The measurement of milk protein IgA, IgE, or IgG (studies No.
III and IV) did not correlate clearly with milk-induced symptoms and gave no clinically significant information; hence their measurement is not encouraged in outpatient clinics. In conclusion, adult-type hypolactasia is often considered the reason for gastrointestinal symptoms in adults, and determination of the C/T-13910 genotypes is a practical way of diagnosing adult-type hypolactasia in an outpatient setting. Undiagnosed coeliac disease should be actively screened for and diagnosed in order to apply a gluten-free diet and avoid GI symptoms and nutritional deficiencies. Cow’s milk hypersensitivity in the adult population is difficult to diagnose since the mechanism by which it is mediated is still unclear. Measurement of cow’s milk protein-specific IgE, IgA, or IgG antibodies does not correlate with subjective milk-related GI symptoms.
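The genotype-by-symptom association examined in Study No. I is the kind of question a Pearson chi-square test of independence answers. The contingency counts below are invented for illustration; the thesis does not report its cell counts in this abstract.

```python
# Hedged sketch: chi-square test of independence for a 2x3 table of
# self-reported milk-induced symptoms (yes/no) by C/T-13910 genotype
# (C/C, C/T, T/T). Counts are illustrative, NOT the study's data.

def chi_square(table: list) -> float:
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / total
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# rows: symptoms yes / no; columns: C/C, C/T, T/T (illustrative)
observed = [[60, 40, 30], [90, 160, 170]]
chi2 = chi_square(observed)
# (rows-1)*(cols-1) = 2 df; critical value at alpha = 0.05 is 5.99
print(round(chi2, 2), chi2 > 5.99)
```

A statistic above the 2-df critical value would indicate that symptom reporting is not independent of genotype, consistent with the C/C-13910 carriers' lower milk consumption noted above.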
Abstract:
This paper explores the obstacles associated with designing video game levels for the purpose of objectively measuring flow. We sought to create three video game levels capable of inducing a flow state, an overload state (low-flow), and a boredom state (low-flow). A pilot study, in which participants self-reported levels of flow after playing all three game levels, was undertaken. Unexpected results point to the challenges of operationalising flow in video game research, obstacles in experimental design for invoking flow and low-flow, concerns about flow as a construct for measuring video game enjoyment, the applicability of self-report flow scales, and the experience of flow in video game play despite substantial challenge-skill differences.
Abstract:
The study assessed whether plasma concentrations of complement factors C3, C4, or immunoglobulins, serum classical pathway hemolytic activity, or polymorphisms in the class I and II HLA genes, isotypes and gene numbers of C4, or allotypes of IgG1 and IgG3 heavy chain genes were associated with severe, frequently recurring, or chronic mucosal infections. According to strict clinical criteria, 188 consecutive voluntary patients without a known immunodeficiency and 198 control subjects were recruited. Frequencies of low levels of IgG1, IgG2, IgG3, and IgG4 were tested for the first time in the adult general population and in patients with acute rhinosinusitis. Frequently recurring intraoral herpes simplex type 1 infections, a rare form of the disease, were associated with homozygosity in the HLA-A*, -B*, -C*, and -DR* genes. Frequently recurrent genital HSV-2 infections were associated with low levels of IgG1 and IgG3, present in 54% of the recruited patients. This association was partly allotype-dependent. The G3mg,G1ma/ax haplotype, together with low IgG3, was more common in patients than in control subjects who lacked antibodies against herpes simplex viruses. This is the first immunogenetic deficiency found in otherwise healthy adults that predisposes to highly frequent mucosal herpes recurrences. According to previous studies, HSV effectively evades the allotype G1ma/ax of IgG1, whereas G3mg is associated with low IgG3. Certain HLA genes were more common in patients than in control subjects. Having more than one C4A or C4B gene was associated with neuralgias caused by the virus. Low levels of IgA, IgG1, IgG2, IgG3, and IgG4 were common in the general adult population, but even more frequent in patients with chronic sinusitis. Only low IgG1 was more common in chronic than in acute rhinosinusitis. Clinically, nasal polyposis and bronchial asthma were associated with complicated disease forms.
The best differentiating immunologic parameters were C4A deficiency and the combination of low plasma IgG4 with low IgG1 or IgG2, which performed almost equally well. The lack of C4A, IgA, and IgG4, all known to possess anti-inflammatory activity, together with concurrently impaired immunity caused by low subclass levels, may predispose to chronic disease forms. In severe chronic adult periodontitis, deficiency of either C4A or C4B was associated with the disease. The new quantitative analysis of C4 genes and the conventional C4 allotyping method complemented each other. Lowered levels of plasma C3, C4, or both, and of serum CH50, were found in herpes and periodontitis patients. In rhinosinusitis there was a linear trend, with the highest levels found in the order: acute rhinosinusitis > chronic rhinosinusitis > general population > blood donors with no self-reported history of rhinosinusitis. Complement is thus involved in the defense against the tested mucosal infections. Seemingly immunocompetent patients with chronic or recurrent mucosal infections frequently have subtle weaknesses in different arms of immunity, and these may underlie their susceptibility to chronic disease forms. The host's subtly impaired immunity often coincides with effective immune evasion of those same arms of immunity by the disease-causing pathogens. The interpretation of low subclass levels, if no additional predisposing immunologic factors are tested, is difficult and of limited value in early diagnosis and treatment.
Abstract:
Student participation in the classroom has long been regarded as an important means of increasing student engagement and enhancing learning outcomes by promoting active learning. However, the approach to class participation common in U.S. law schools, commonly referred to as the Socratic method, has been criticised for its negative impacts on student wellbeing. Numerous American studies have found that participating in law class discussions can be alienating, intimidating, and stressful for some law students, and may be especially so for women and students from minority backgrounds. Using data from the Law School Student Assessment Survey (LSSAS), conducted at UNSW Law School in 2012, this chapter provides preliminary insights into whether assessable class participation (ACP) at an Australian law school is similarly alienating and stressful for students, including for the groups identified in the American literature. In addition, we compare the responses of undergraduate Bachelor of Laws (LLB) and graduate Juris Doctor (JD) students. The LSSAS findings indicate that most respondents recognise the potential learning and social benefits associated with class participation in legal education, but remain divided over their willingness to participate. Further, in line with general trends identified in American studies, LLB students, women, international students, and non-native English speakers perceive that they contribute less frequently to class discussions than JD students, men, domestic students, and native English speakers, respectively. Importantly, the LSSAS indicates that students are more likely to be anxious about contributing to class discussions if they are LLB students (compared with their JD counterparts) and if English is not their first language (compared with native English speakers). There were no significant differences in students' self-reported anxiety levels based on gender, which diverges from the findings of American research.
Abstract:
Over the past two decades, the selection, optimization, and compensation (SOC) model has been applied in the work context to investigate antecedents and outcomes of employees' use of action regulation strategies. We systematically review, meta-analyze, and critically discuss the literature on SOC strategy use at work and outline directions for future research and practice. The systematic review illustrates the breadth of constructs that have been studied in relation to SOC strategy use, and that SOC strategy use can mediate and moderate relationships of person and contextual antecedents with work outcomes. Results of the meta-analysis show that SOC strategy use is positively related to age (rc = .04), job autonomy (rc = .17), self-reported job performance (rc = .23), non-self-reported job performance (rc = .21), job satisfaction (rc = .25), and job engagement (rc = .38), whereas SOC strategy use is not significantly related to job tenure, job demands, and job strain. Overall, our findings underline the importance of the SOC model for the work context, and they also suggest that its measurement and reporting standards need to be improved to become a reliable guide for future research and organizational practice.
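The corrected correlations (rc) reported above are pooled estimates across studies. As a rough illustration of the bare-bones first step of such a meta-analysis (not the authors' actual procedure, and with hypothetical study values), a sample-size-weighted mean of observed correlations can be computed like this:

```python
def weighted_mean_correlation(studies):
    """Bare-bones meta-analytic pooling: sample-size-weighted mean of
    observed correlations. `studies` is a list of (n, r) pairs, where
    n is a study's sample size and r its observed correlation."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Hypothetical study values, purely for illustration:
studies = [(120, 0.30), (80, 0.20), (200, 0.25)]
print(round(weighted_mean_correlation(studies), 3))  # ≈ 0.255
```

Fully corrected rc values additionally adjust for artifacts such as measurement unreliability, which this sketch omits.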
Abstract:
In this paper, we re-examine the relationship between overweight and labour market success, using indicators of individual body composition alongside the Body Mass Index (BMI). We use a Finnish dataset in which weight, height, fat mass, and waist circumference are not self-reported but were obtained as part of an overall health examination. We find that waist circumference, but not weight or fat mass, has a negative effect on wages for women, whereas all measures of obesity have negative effects on women's employment probabilities. For men, the only obesity measure significantly associated with employment probabilities is fat mass. One interpretation of our findings is that the negative effects of overweight on wages run through a discrimination channel, whereas the negative effects of overweight on employment have more to do with ill health. All in all, measures of body composition provide a more refined picture of the effects of obesity on wages and employment than BMI alone.
Abstract:
This paper explores the relationship between the physical strenuousness of work and the body mass index (BMI) in Finland, using individual microdata over the period 1972-2002. The data contain self-reported information about the physical strenuousness of a respondent's current occupation. Our estimates show that changes in the physical strenuousness of work can explain at most around 8% of the marked increase in BMI observed over the period. The main reason appears to be that the magnitude of the effect of the physical strenuousness of work on BMI is rather moderate: according to the point estimates, BMI is only around 1.5% lower when one's current occupation is physically very demanding and involves lifting and carrying heavy objects, compared with a sedentary job (the reference group in the estimations), other things being equal. Accordingly, changes in eating habits and in the amount of physical activity during leisure time, rather than changes in labour market structure, must be the most important contributors to the upward trend in BMI in industrialised countries.
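Both Finnish studies rely on the Body Mass Index, which is defined as body weight in kilograms divided by the square of height in metres. A minimal sketch with illustrative values (not drawn from the studies' data):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# Illustrative values only, not from the studies' microdata:
b = bmi(80.0, 1.80)
print(round(b, 1))           # ≈ 24.7
print(round(b * 0.985, 1))   # ≈ 1.5% lower, the order of the point estimate above
```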