778 results for Secondary Appraisal
Abstract:
OBJECTIVE: Acute mental stress elicits blood hypercoagulability. Following a transactional stress model, we investigated whether individuals who anticipate stress as more threatening, challenging, and as exceeding their coping skills show greater stress reactivity of the coagulation activation marker D-dimer, indicating fibrin generation in plasma. METHODS: Forty-seven men (mean age 44 ± 14 years; mean blood pressure [MBP] 101 ± 12 mm Hg; mean body mass index [BMI] 26 ± 3 kg/m²) completed the Primary Appraisal Secondary Appraisal (PASA) scale before undergoing the Trier Social Stress Test (combination of mock job interview and mental arithmetic task). Heart rate, blood pressure, plasma catecholamines, and D-dimer levels were measured before and after stress, and during recovery up to 60 minutes poststress. RESULTS: Hemodynamic measures, catecholamines, and D-dimer changed across all time points (p values <.001). The PASA "Stress Index" (integrated measure of transactional stress perception) correlated with total D-dimer area under the curve (AUC) between rest and 60 minutes poststress (r = 0.30, p = .050) and with D-dimer change from rest to immediately poststress (r = 0.29, p = .046). Primary appraisal (combined "threat" and "challenge") correlated with total D-dimer AUC (r = 0.37, p = .017), D-dimer stress change (r = 0.41, p = .004), and D-dimer recovery (r = 0.32, p = .042). "Challenge" correlated more strongly with D-dimer stress change than "threat" (p = .020). Primary appraisal (ΔR² = 0.098, β = 0.37, p = .019), and particularly its subscale "challenge" (ΔR² = 0.138, β = 0.40, p = .005), predicted D-dimer stress change independently of age, BP, BMI, and catecholamine change. CONCLUSIONS: Anticipatory cognitive appraisal determined the extent of coagulation activation to and recovery from stress in men.
In particular, individuals who anticipated the stressor as more challenging, and also as more threatening, had a greater fibrin stress response.
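Total-AUC summaries of repeated biomarker measurements, like the D-dimer AUC reported above, are commonly computed with the trapezoidal rule over the sampling time points. A minimal sketch of that calculation; the time points and D-dimer values below are hypothetical illustrations, not the study's data:

```python
# Trapezoidal area under the curve (AUC) for a repeated-measures biomarker,
# as commonly used for "total AUC" summaries in stress-reactivity studies.

def auc_trapezoid(times, values):
    """Total area under the curve via the trapezoidal rule."""
    if len(times) != len(values) or len(times) < 2:
        raise ValueError("need matching lists with at least two points")
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]           # width of this interval
        area += dt * (values[i] + values[i - 1]) / 2.0  # trapezoid area
    return area

# Minutes relative to rest baseline: rest, poststress, recovery samples
times = [0, 15, 30, 45, 60, 75]           # minutes (hypothetical)
ddimer = [310, 365, 390, 370, 350, 340]   # ng/mL (hypothetical)

print(auc_trapezoid(times, ddimer))  # → 27000.0
```

Such an AUC collapses the whole response-and-recovery curve into one number per participant, which can then be correlated with appraisal scores.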
Abstract:
Few organizational change studies identify the aspects of change that are salient to individuals and that influence well-being. The authors identified three distinct change characteristics: the frequency, impact and planning of change. R. S. Lazarus and S. Folkman's (1984) cognitive phenomenological model of stress and coping was used to propose ways that these change characteristics influence individuals' appraisal of the uncertainty associated with change, and, ultimately, job satisfaction and turnover intentions. Results of a repeated cross-sectional study that collected individuals' perceptions of change one month prior to employee attitudes in consecutive years indicated that while the three change perceptions were moderately to strongly intercorrelated, the change perceptions displayed differential relationships with outcomes. Discussion focuses on the importance of systematically considering individuals' subjective experience of change.
Abstract:
Based on the attributional reformulation of learned helplessness theory (Abramson, Seligman, & Teasdale, 1978) and Lazarus and Launier's (1978) primary-secondary appraisal theory of stress, the present study sought to examine teleworkers' reactions to their work-related problems. The role of attributions about the sources, and cognitions about the consequences, of these problems in promoting positive adaptation was addressed. In particular, it was predicted that teleworkers who made optimistic attributions and cognitions would be more likely to employ problem-focused coping strategies and, as a result, report more positive psychological and job-related outcomes. Based on a survey sample of 192 teleworkers, the results indicated that a tendency to engage in self-blame was related to the use of emotion-focused coping strategies. In turn, there was evidence linking emotion-focused coping strategies to negative outcomes and problem-focused coping strategies to positive outcomes. The results are discussed in relation to attributional approaches to stress which highlight the importance of cognitions about the consequences of negative events. Finally, implications for the training of teleworkers are presented.
Abstract:
It has been well documented that the optimum feedstock for anaerobic digesters consists of readily biodegradable compounds, as found in primary sludge or even a mixed substrate of primary and excess activated sludge. Due to the requirements of the Urban Wastewater Treatment Plant Directive of 1991, the quantities of secondary sludge generated are set to increase substantially. A pilot-scale study was undertaken to evaluate the performance of both Mesophilic Anaerobic Digestion and Thermophilic Aerobic Digestion in the treatment of secondary sludge. The results indicated that the anaerobic pilot-scale digester achieved greater solids destruction than the aerobic pilot plant, averaging 28% T.S. removal versus 20% for the aerobic digester, despite the fact that secondary sludge is the optimum feedstock for aerobic digestion. This can, however, be attributed to the greater biomass yield experienced with aerobic systems, and to the absence of autothermal conditions. At present, the traditional technique of Mesophilic Anaerobic Digestion is in widespread application throughout Ireland for the stabilisation of sewage sludge. There is currently only one Autothermal Thermophilic Aerobic Digester (ATAD), situated in Killarney, Co. Kerry. A further objective of the study was to compare full-scale applications of Mesophilic Anaerobic Digestion to ATAD. Two sludge treatment plants, situated in Co. Kerry, were used for this purpose and were assessed mainly under the headings of process stability and solids reduction. On average, the ATAD plant in Killarney has the advantage of producing a "Class A" biosolid in terms of pathogen reduction, and can effectively treat double the quantity of sludge. In addition, the ATAD plant is cheaper to run, costing €190/t.d.s. versus €211/t.d.s. for the anaerobic digester in Tralee. An overview of additional operational Anaerobic Digestion plants throughout Ireland is also presented.
Abstract:
BACKGROUND: During the last decade, the management of blunt hepatic injury has considerably changed. Three options are available as follows: nonoperative management (NOM), transarterial embolization (TAE), and surgery. We aimed to evaluate in a systematic review the current practice and outcomes in the management of Grade III to V blunt hepatic injury. METHOD: The MEDLINE database was searched using PubMed to identify English-language citations published after 2000 using the key words blunt, hepatic injury, severe, and grade III to V in different combinations. Liver injury was graded according to the American Association for the Surgery of Trauma classification on computed tomography (CT). Primary outcome analyzed was success rate in intention to treat. Critical appraisal of the literature was performed using the validated National Institute for Health and Care Excellence "Quality Assessment for Case Series" system. RESULTS: Twelve articles were selected for critical appraisal (n = 4,946 patients). The median quality score of articles was 4 of 8 (range, 2-6). Overall, the median Injury Severity Score (ISS) at admission was 26 (range, 0.6-75). A median of 66% (range, 0-100%) of patients was managed with NOM, with a success rate of 94% (range, 86-100%). TAE was used in only 3% of cases (range, 0-72%) owing to contrast extravasation on CT with a success rate of 93% (range, 81-100%); however, 9% to 30% of patients required a laparotomy. Thirty-one percent (range, 17-100%) of patients were managed with surgery owing to hemodynamic instability in most cases, with 12% to 28% requiring secondary TAE to control recurrent hepatic bleeding. Mortality was 5% (range, 0-8%) after NOM and 51% (range, 30-68%) after surgery. CONCLUSION: NOM of Grade III to V blunt hepatic injury is the first treatment option to manage hemodynamically stable patients. 
TAE and surgery are considered in a highly selective group of patients with contrast extravasation on CT or shock at admission, respectively. Additional standardization of the reports is necessary to allow accurate comparisons of the various management strategies. LEVEL OF EVIDENCE: Systematic review, level IV.
Abstract:
BACKGROUND: Pain is a common experience in later life. There is conflicting evidence of the prevalence, impact, and context of pain in older people. GPs are criticised for underestimating and under-treating pain. AIM: To assess the extent to which older people experience pain, and to explore relationships between self-reported pain and functional ability and depression. DESIGN OF STUDY: Secondary analysis of baseline data from a randomised controlled trial of health risk appraisal. SETTING: A total of 1090 community-dwelling non-disabled people aged 65 years and over were included in the study from three group practices in suburban London. METHOD: Main outcome measures were pain in the last 4 weeks and the impact of pain, measured using the 24-item Geriatric Pain Measure; depression symptoms captured using the 5-item Mental Health Inventory; social relationships measured using the 6-item Lubben Social Network Scale; Basic and Instrumental Activities of Daily Living and self-reported symptoms. RESULTS: Forty-five per cent of women and 34% of men reported pain in the previous 4 weeks. Pain experience appeared to be less in the 'oldest old': 27.5% of those aged 85 years and over reported pain compared with 38-53% of the 'younger old'. Those with arthritis were four times more likely to report pain. Pain had a profound impact on activities of daily living, but most of those reporting pain described their health as good or excellent. Although there was a significant association between the experience of pain and depressed mood, the majority of those reporting pain did not have depressed mood. CONCLUSION: A multidimensional approach to assessing pain is appropriate. Primary care practitioners should also assess the impact of pain on activities of daily living.
Abstract:
BACKGROUND: Social isolation is associated with poorer health, and is seen by the World Health Organisation (WHO) as one of the major issues facing the industrialised world. AIM: To explore the significance of social isolation in the older population for GPs and for service commissioners. DESIGN OF STUDY: Secondary analysis of baseline data from a randomised controlled trial of health risk appraisal. SETTING: A total of 2641 community-dwelling, non-disabled people aged 65 years and over in suburban London. METHOD: Demographic details, social network and risk for social isolation based on the 6-item Lubben Social Network Scale, measures of depressed mood, memory problems, numbers of chronic conditions, medication use, functional ability, self-reported use of medical services. RESULTS: More than 15% of the older age group were at risk of social isolation, and this risk increased with advancing age. In bivariate analyses risk of social isolation was associated with older age, education up to 16 years only, depressed mood and impaired memory, perceived fair or poor health, perceived difficulty with both basic and instrumental activities of daily living, diminishing functional ability, and fear of falling. Despite poorer health status, those at risk of social isolation did not appear to make greater use of medical services, nor were they at greater risk of hospital admission. Half of those who scored as at risk of social isolation lived with others. Multivariate analysis showed significant independent associations between risk of social isolation and depressed mood and living alone, and weak associations with male sex, impaired memory and perceived poor health. CONCLUSION: The risk of social isolation is elevated in older men, in older persons who live alone, and in persons with mood or cognitive problems, but is not associated with greater use of services.
These findings would not support population screening for individuals at risk of social isolation with a view to averting service use by timely intervention. Awareness of social isolation should trigger further assessment, and consideration of interventions to alleviate social isolation, treat depression or ameliorate cognitive impairment.
Abstract:
BACKGROUND: In the UK, population screening for unmet need has failed to improve the health of older people. Attention is turning to interventions targeted at 'at-risk' groups. Living alone in later life is seen as a potential health risk, and older people living alone are thought to be an at-risk group worthy of further intervention. AIM: To explore the clinical significance of living alone and the epidemiology of lone status as an at-risk category, by investigating associations between lone status and health behaviours, health status, and service use, in non-disabled older people. DESIGN OF STUDY: Secondary analysis of baseline data from a randomised controlled trial of health risk appraisal in older people. SETTING: Four group practices in suburban London. METHOD: Sixty per cent of 2641 community-dwelling non-disabled people aged 65 years and over registered at a practice agreed to participate in the study; 84% of these returned completed questionnaires. A third of this group (n = 860, 33.1%) lived alone and two-thirds (n = 1741, 66.9%) lived with someone else. RESULTS: Those living alone were more likely to report fair or poor health, poor vision, difficulties in instrumental and basic activities of daily living, worse memory and mood, lower physical activity, poorer diet, worsening function, risk of social isolation, hazardous alcohol use, having no emergency carer, and multiple falls in the previous 12 months. After adjustment for age, sex, income, and educational attainment, living alone remained associated with multiple falls, functional impairment, poor diet, smoking status, risk of social isolation, and three self-reported chronic conditions: arthritis and/or rheumatism, glaucoma, and cataracts.
CONCLUSION: Clinicians working with independently-living older people living alone should anticipate higher levels of disease and disability in these patients, and higher health and social risks, much of which will be due to older age, lower educational status, and female sex. Living alone itself appears to be associated with higher risks of falling, and constellations of pathologies, including visual loss and joint disorders. Targeted population screening using lone status may be useful in identifying older individuals at high risk of falling.
Abstract:
BACKGROUND Although free eye testing is available in the UK from a nation-wide network of optometrists, there is evidence of unrecognised, tractable vision loss amongst older people. A recent review identified this unmet need as a priority for further investigation, highlighting the need to understand public perceptions of eye services and barriers to service access and utilisation. This paper aims to identify risk factors for (1) having poor vision and (2) not having had an eyesight check among community-dwelling older people without an established ophthalmological diagnosis. METHODS Secondary analysis of self-reported data from the ProAge trial. 1792 people without a known ophthalmological diagnosis were recruited from three group practices in London. RESULTS Almost two in ten people in this population of older individuals without known ophthalmological diagnoses had self-reported vision loss, and more than a third of them had not had an eye test in the previous twelve months. In this sample, those with limited education, depressed mood, need for help with instrumental and basic activities of daily living (IADLs and BADLs), and subjective memory complaints were at increased risk of fair or poor self-reported vision. Individuals with basic education only were at increased risk for not having had an eye test in the previous 12 months (OR 1.52, 95% CI 1.17-1.98 p=0.002), as were those with no, or only one chronic condition (OR 1.850, 95% CI 1.382-2.477, p<0.001). CONCLUSIONS Self-reported poor vision in older people without ophthalmological diagnoses is associated with other functional losses, with no or only one chronic condition, and with depression. This pattern of disorders may be the basis for case finding in general practice. Low educational attainment is an independent determinant of not having had eye tests, as well as a factor associated with undiagnosed vision loss. 
There are other factors, not identified in this study, which determine uptake of eye testing in those with self-reported vision loss. Further exploration is needed to identify these factors and lead towards effective case finding.
Abstract:
The national welfare state, so it seems, has come under attack by European integration. This article focuses on one facet of the welfare state, that is, health care and on one specific dimension, that is, cross-border movement of patients. The institution which has played a pivotal role in the development of the framework regulating the migration of patients is the European Court of Justice (ECJ). The Court’s activity in this sensitive area has not remained without critics. This was even more so since the Court invoked Treaty (primary) law which not only has made it difficult to overturn case law but also has left the legislator with very little room for manoeuvre in relation to any future (secondary) EU law. What is therefore of special interest in terms of legitimacy is the legal reasoning by which the Court has made its contribution to the development of this framework. This article is a re-appraisal of the legal development in this field.
Abstract:
Button battery ingestion is a frequent pediatric complaint. The serious complications resulting from accidental ingestion have increased significantly over the last two decades due to easy access to gadgets and electronic toys. Over recent years, the increasing use of lithium batteries of diameter 20 mm has brought new challenges, because these are more detrimental to the mucosa than other types, with high morbidity and mortality. The clinical complaints, which are often nonspecific, may lead to delayed diagnosis, thereby increasing the risk of severe complications. A five-year-old boy, who had been complaining of abdominal pain for ten days, was brought to the emergency service with a clinical condition of hematemesis that had started two hours earlier. On admission, he presented pallor, tachycardia and hypotension. A plain abdominal x-ray produced an image suggestive of a button battery. Digestive endoscopy showed a deep ulcerated lesion in the esophagus without active bleeding. After this procedure, the patient presented profuse hematemesis and severe hypotension, followed by cardiorespiratory arrest, which was reversed. He then underwent emergency exploratory laparotomy and presented a new episode of cardiorespiratory arrest, which he did not survive. The battery was removed through rectal exploration. This case describes a fatal evolution of button battery ingestion with late diagnosis and severe associated injury of the digestive mucosa. A high level of clinical suspicion is essential for preventing this evolution. Preventive strategies are required, as well as health education, with warnings to parents, caregivers and healthcare professionals.
Abstract:
Most epidemiological studies concerning differentiated thyroid cancers (DTC) indicate an increasing incidence over the last two decades. This increase might be partially explained by better access to health services worldwide, but clinicopathological analyses do not fully support this hypothesis, indicating that there are carcinogenetic factors behind this noticeable increasing incidence. Although our understanding of the biology and molecular pathways underlying thyroid carcinogenesis has undoubtedly improved, we have made very little progress in identifying a risk profile for DTC, and our knowledge of risk factors is very similar to what we knew 30-40 years ago. In addition to ionizing radiation exposure, the most documented and established risk factor for DTC, we also investigated the role of other factors, including eating habits, tobacco smoking, living in a volcanic area, xenobiotics, and viruses, which could be involved in thyroid carcinogenesis, thus contributing to the observed increase in DTC incidence rates.
Abstract:
Purified genomic DNA can be difficult to obtain from some plant species because of the presence of impurities such as polysaccharides, which are often co-extracted with DNA. In this study, we developed a fast, simple, and low-cost protocol for extracting DNA from plants containing high levels of secondary metabolites. This protocol does not require the use of volatile toxic reagents such as mercaptoethanol, chloroform, or phenol and allows the extraction of high-quality DNA from wild and cultivated tropical species.
Abstract:
Metastasizing pleomorphic adenoma (MPA) is a rare tumour, and its mechanism of metastasis still is unknown. To date, there has been no study on MPA genomics. We analysed primary and secondary MPAs with array comparative genomic hybridization to identify somatic copy number alterations and affected genes. Tumour DNA samples from primary (parotid salivary gland) and secondary (scalp skin) MPAs were subjected to array comparative genomic hybridization investigation, and the data were analysed with NEXUS COPY NUMBER DISCOVERY. The primary MPA showed copy number losses affecting 3p22.2p14.3 and 19p13.3p123, and a complex pattern of four different deletions at chromosome 6. The 3p deletion encompassed several genes: CTNNB1, SETD2, BAP1, and PBRM1, among others. The secondary MPA showed a genomic profile similar to that of the primary MPA, with acquisition of additional copy number changes affecting 9p24.3p13.1 (loss), 19q11q13.43 (gain), and 22q11.1q13.33 (gain). Our findings indicated a clonal origin of the secondary MPA, as both tumours shared a common profile of genomic copy number alterations. Furthermore, we were able to detect in the primary tumour a specific pattern of copy number alterations that could explain the metastasizing characteristic, whereas the secondary MPA showed a more unbalanced genome.
Abstract:
Secondary caries has been reported as the main reason for restoration replacement. The aim of this in vitro study was to evaluate the performance of different methods - visual inspection, laser fluorescence (DIAGNOdent), radiography and tactile examination - for secondary caries detection in primary molars restored with amalgam. Fifty-four primary molars were photographed and 73 suspect sites adjacent to amalgam restorations were selected. Two examiners independently evaluated these sites using all methods. Agreement between examiners was assessed by the Kappa test. To validate the methods, a caries-detector dye was used after restoration removal. The best cut-off points for the sample were found by Receiver Operating Characteristic (ROC) analysis, and the area under the ROC curve (Az), sensitivity, specificity and accuracy of the methods were calculated for the enamel (D2) and dentine (D3) thresholds. These parameters were found for each method and then compared by the McNemar test. The tactile examination and visual inspection presented the highest inter-examiner agreement for the D2 and D3 thresholds, respectively. The visual inspection also showed better performance than the other methods for both thresholds (Az = 0.861 and Az = 0.841, respectively). In conclusion, visual inspection presented the best performance for detecting enamel and dentin secondary caries in primary teeth restored with amalgam.
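ROC analyses like the one above derive a cut-off by scanning candidate thresholds and computing sensitivity and specificity against a validation gold standard. A minimal sketch of one common selection criterion, the Youden index; the fluorescence readings, validation labels, and the use of the Youden index itself are illustrative assumptions, not details taken from the study:

```python
# Selecting a diagnostic cut-off from continuous scores and a gold standard,
# as in ROC analyses of caries-detection methods. All data are hypothetical.

def sens_spec(scores, truth, cutoff):
    """Sensitivity and specificity when score >= cutoff counts as 'caries'."""
    tp = sum(1 for s, t in zip(scores, truth) if s >= cutoff and t)
    fn = sum(1 for s, t in zip(scores, truth) if s < cutoff and t)
    tn = sum(1 for s, t in zip(scores, truth) if s < cutoff and not t)
    fp = sum(1 for s, t in zip(scores, truth) if s >= cutoff and not t)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, truth):
    """Cut-off maximising the Youden index (sensitivity + specificity - 1)."""
    best, best_j = None, -1.0
    for c in sorted(set(scores)):          # each observed score is a candidate
        se, sp = sens_spec(scores, truth, c)
        j = se + sp - 1.0
        if j > best_j:
            best, best_j = c, j
    return best

# Hypothetical laser-fluorescence readings and histological validation (1 = caries)
scores = [4, 7, 12, 15, 21, 25, 30, 33]
truth  = [0, 0, 0,  1,  0,  1,  1,  1]
print(best_cutoff(scores, truth))  # → 15
```

The same scan, evaluated over all thresholds, yields the points of the ROC curve whose area (Az) summarises each method's overall discrimination.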