865 results for "Risk controlling strategies"
Abstract:
This study analysed mechanisms through which stress-coping and temptation-coping strategies were associated with lapses. Furthermore, we explored whether distinct coping strategies differentially predicted reduced lapse risk, lower urge levels, or a weaker association between urge levels and lapses during the first week of an unassisted smoking cessation attempt. Participants were recruited via the internet and mass media in Switzerland. Ecological momentary assessment (EMA) with mobile devices was used to assess urge levels and lapses. Online questionnaires were used to measure smoking behaviours and coping variables at baseline, as well as smoking behaviour at the three-month follow-up. The sample consisted of 243 individuals, aged 20 to 40, who reported 4199 observations. Findings of multilevel regression analyses show that coping was mainly associated with a reduced lapse risk, and not with lower urge levels or a weaker association between urge levels and lapses. 'Calming down' and 'commitment to change' predicted a lower lapse risk and also a weaker relation between urge levels and lapses. 'Stimulus control' predicted a lower lapse risk and lower urge levels. Conversely, 'task-orientation' and 'risk assessment' were related to higher lapse risk, and 'risk assessment' also to higher urge levels. Disengagement coping, i.e. 'eating or shopping', 'distraction', and 'mobilising social support', did not affect lapse risk. Promising coping strategies during the initial stage of a smoking cessation attempt are targeted directly at reducing lapse risk and are characterised by engagement with the stressor or one's reactions to the stressor, and by a focus on positive consequences instead of health risks.
Abstract:
This longitudinal study investigated whether cybervictimisation is an additional risk factor for depressive symptoms, over and above traditional victimisation, in adolescents. Furthermore, it explored whether certain coping strategies moderate the impact of cybervictimisation on depressive symptoms. A total of 765 Swiss seventh graders (mean age at time-point 1 (t1) = 13.18 years) reported on the frequency of traditional victimisation and cybervictimisation, and on depressive symptoms, twice within six months. At time-point 2 (t2) students also completed a questionnaire on coping strategies in response to a hypothetical cyberbullying scenario. Analyses showed that both traditional and cybervictimisation were associated with higher levels of depressive symptoms. Cybervictimisation also predicted increases in depressive symptoms over time. Regarding coping strategies, it was found that helpless reactions were positively associated with depressive symptoms. Moreover, support seeking from peers and family showed a significant buffering effect: cybervictims who recommended seeking close support showed lower levels of depressive symptoms at t2. In contrast, cybervictims recommending assertive coping strategies showed higher levels of depressive symptoms at t2.
Abstract:
Healthcare websites that are influential in healthcare decision-making must be evaluated for accuracy, readability and understandability by the average population. Most existing frameworks for designing and evaluating interactive websites focus on the utility and usability of the site. Although these are significant to the design of the basic site, they are not sufficient. We have developed an iterative framework that considers additional attributes.
Abstract:
BACKGROUND: Women at increased risk of breast cancer (BC) are not widely accepting of chemopreventive interventions, and ethnic minorities are underrepresented in related trials. Furthermore, there is no validated instrument to assess the health-seeking behavior of these women with respect to these interventions. METHODS: Using constructs from the Health Belief Model, the authors developed and refined, based on pilot data, the Breast Cancer Risk Reduction Health Belief (BCRRHB) scale using a population of 265 women at increased risk of BC who were largely medically underserved, of low socioeconomic status (SES), and ethnic minorities. Construct validity was assessed using principal components analysis with oblique rotation to extract factors and to generate and interpret summary scales. Internal consistency was determined using Cronbach alpha coefficients. RESULTS: Test-retest reliability for the pilot and final data was calculated to be r = 0.85. Principal components analysis yielded 16 components that explained 64% of the total variance, with communalities ranging from 0.50 to 0.75. Cronbach alpha coefficients for the extracted factors ranged from 0.45 to 0.77. CONCLUSIONS: Evidence suggests that the BCRRHB yields reliable and valid data that allow for the identification of barriers and enhancing factors associated with the use of breast cancer chemoprevention in the study population. These findings allow for tailoring treatment plans and intervention strategies to the individual. Future research is needed to validate the scale for use in other female populations. Cancer 2009. (c) 2009 American Cancer Society.
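The Cronbach alpha coefficients reported above follow the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch of that computation, using made-up Likert ratings rather than the study's data:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(rows[0])                                  # number of items
    items = list(zip(*rows))                          # one tuple of scores per item
    item_var = sum(variance(col) for col in items)    # sum of per-item variances
    total_var = variance([sum(r) for r in rows])      # variance of summed scale scores
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical ratings: 4 respondents x 3 items, for illustration only
ratings = [[2, 4, 3], [4, 5, 5], [3, 4, 4], [5, 5, 5]]
print(round(cronbach_alpha(ratings), 2))  # → 0.92
```

Values around 0.7 or higher are conventionally read as acceptable internal consistency, which is why the 0.45-0.77 range above spans weaker and stronger subscales.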
Abstract:
OBJECTIVE: We sought to evaluate the performance of the human papillomavirus high-risk DNA test in patients 30 years and older. MATERIALS AND METHODS: Screening (n=835) and diagnosis (n=518) groups were defined based on prior Papanicolaou smear results as part of a clinical trial for cervical cancer detection. We compared the Hybrid Capture II (HCII) test result with the worst histologic report. We used cervical intraepithelial neoplasia (CIN) 2/3 or worse as the reference standard for disease. We calculated sensitivities, specificities, positive and negative likelihood ratios (LR+ and LR-), receiver operating characteristic (ROC) curves, and areas under the ROC curves for the HCII test. We also considered alternative strategies, including Papanicolaou smear, a combination of Papanicolaou smear and the HCII test, a sequence of Papanicolaou smear followed by the HCII test, and a sequence of the HCII test followed by Papanicolaou smear. RESULTS: For the screening group, the sensitivity was 0.69 and the specificity was 0.93; the area under the ROC curve was 0.81. The LR+ and LR- were 10.24 and 0.34, respectively. For the diagnosis group, the sensitivity was 0.88 and the specificity was 0.78; the area under the ROC curve was 0.83. The LR+ and LR- were 4.06 and 0.14, respectively. Sequential testing showed little or no improvement over combination testing. CONCLUSIONS: The HCII test in the screening group had a greater LR+ for the detection of CIN 2/3 or worse. HCII testing may be an additional screening tool for cervical cancer in women 30 years and older.
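The likelihood ratios quoted above relate to sensitivity and specificity through LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. A minimal sketch of that arithmetic; applying it to the rounded summary figures gives values close to, but not identical with, the reported ones, which were presumably computed from the raw counts:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios for a binary diagnostic test."""
    lr_pos = sensitivity / (1 - specificity)   # how much a positive result raises odds of disease
    lr_neg = (1 - sensitivity) / specificity   # how much a negative result lowers them
    return lr_pos, lr_neg

# Rounded screening-group figures from the abstract
lr_pos, lr_neg = likelihood_ratios(0.69, 0.93)
print(round(lr_pos, 2), round(lr_neg, 2))  # → 9.86 0.33 (abstract reports 10.24 and 0.34)
```

The small discrepancy illustrates why likelihood ratios should be derived from the underlying 2x2 counts rather than from pre-rounded sensitivity and specificity.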
Abstract:
This paper focuses on a project developed in Texas that utilizes community organizing strategies to advance childhood food security. With a dual focus on organizing policymakers and local communities, The Texas Hunger Initiative provides an example of an organizing project with the goal of ending childhood food insecurity in Texas.
Abstract:
The association of measures of physical activity with coronary heart disease (CHD) risk factors in children, especially those for atherosclerosis, is unknown. The purpose of this study was to determine the association of physical activity and cardiovascular fitness with blood lipids and lipoproteins in pre-adolescent and adolescent girls. The study population comprised 131 girls aged 9 to 16 years who participated in the Children's Nutrition Research Center's Adolescent Study. The dependent variables, blood lipids and lipoproteins, were measured by standard techniques. The independent variables were physical activity, measured as the difference between total energy expenditure (TEE) and basal metabolic rate (BMR), and cardiovascular fitness, VO2max (ml/min/kg). TEE was measured by the doubly-labeled water (DLW) method, and BMR by whole-room calorimetry. Cardiovascular fitness, VO2max (ml/min/kg), was measured on a motorized treadmill. The potential confounding variables were sexual maturation (Tanner breast stage), ethnic group, body fat percent, and dietary variables. A systematic strategy for data analysis was used to isolate the effects of physical activity and cardiovascular fitness on blood lipids, beginning with assessment of confounding and interaction. Next, from regression models predicting each blood lipid and controlling for covariables, hypotheses were evaluated by the direction and value of the coefficients for physical activity and cardiovascular fitness. The main result was that cardiovascular fitness appeared to be more strongly associated with blood lipids than physical activity.
An interaction between cardiovascular fitness and sexual maturation indicated that the effect of cardiovascular fitness on most blood lipids was dependent on the stage of sexual maturation. A difference of 760 kcal/d in physical activity (which represents the difference between the 25th and 75th percentiles of physical activity) was associated with negligible differences in blood lipids. In contrast, a difference of 10 ml/min/kg in VO2max, or cardiovascular fitness (which represents the difference between the 25th and 75th percentiles in cardiovascular fitness), in the early stages of sexual maturation was associated with an average positive difference of 15 mg/100 ml in ApoA-1 and 10 mg/100 ml in HDL-C.
Abstract:
The neutral bis((pivaloyloxy)methyl) (PIV2) derivatives of FdUMP, ddUMP, and AZTMP were synthesized as potential membrane-permeable prodrugs of FdUMP, ddUMP, and AZTMP. These compounds were designed to enter cells by passive diffusion and revert to the parent nucleotides after removal of the PIV groups by hydrolytic enzymes. These prodrugs were prepared by condensation of FUdR, ddU, and AZT with PIV2 phosphate in the presence of triphenylphosphine and diethyl azodicarboxylate (the Mitsunobu reagent). PIV2-FdUMP, PIV2-ddUMP, and PIV2-AZTMP were stable in the pH range 1.0-4.0 (t1/2 > 100 h). They were also fairly stable at pH 7.4 (t1/2 > 40 h). In 0.05 M NaOH solution, however, they were rapidly degraded (t1/2 < 2 min). In the presence of hog liver carboxylate esterase, they were converted quantitatively to the corresponding phosphodiesters, PIV1-FdUMP, PIV1-ddUMP, and PIV1-AZTMP; after 24 h of incubation, only trace amounts of FdUMP, ddUMP, and AZTMP (1-5%) were observed, indicating that the PIV1 compounds were poor substrates for the enzyme. In human plasma, the PIV2 compounds were rapidly degraded, with half-lives of less than 5 min. The rate of degradation of the PIV2 compounds in the presence of phosphodiesterase I was the same as that in buffer controls, indicating that they were not substrates for this enzyme. In the presence of phosphodiesterase I, PIV1-FdUMP, PIV1-ddUMP, and PIV1-AZTMP were converted quantitatively to FdUMP, ddUMP, and AZTMP. PIV2-ddUMP and PIV2-AZTMP were effective at controlling HIV type 1 infection in MT-4 and CEM tk- cells in culture. Mechanistic studies demonstrated that PIV2-ddUMP and PIV2-AZTMP were taken up by the cells and converted to ddUTP and AZTTP, both potent inhibitors of HIV reverse transcriptase.
However, a potential shortcoming of PIV2-ddUMP and PIV2-AZTMP as clinical therapeutic agents is that they are rapidly degraded (t1/2 approx. 4 min) in human plasma by carboxylate esterases. To circumvent this limitation, chemically labile nucleotide prodrugs and liposome-encapsulated nucleotide prodrugs were investigated. In the former approach, the protective groups bis(N,N-(dimethyl)carbamoyloxymethyl) (DM2) and bis(N-(piperidino)carbamoyloxymethyl) (DP2) were used to synthesize DM2-ddUMP and DP2-ddUMP, respectively. In aqueous buffers (pH range 1.0-9.0) these compounds were degraded with half-lives of 3 to 4 h. They had similar half-lives in human plasma, demonstrating that they were resistant to esterase-mediated cleavage. However, neither compound gave rise to significant concentrations of ddUMP in CEM or CEM tk- cells. In the liposome-encapsulated nucleotide prodrug approach, three different liposomal formulations of PIV2-ddUMP (L-PIV2-ddUMP) were investigated. The half-lives of these L-PIV2-ddUMP preparations in human plasma were 2 h, compared with 4 min for the free drug. The preparations were more effective at controlling HIV-1 infection than free PIV2-ddUMP in human T cells in culture. Collectively, these data indicate that PIV2-FdUMP, PIV2-ddUMP, and PIV2-AZTMP are effective membrane-permeable prodrugs of FdUMP, ddUMP, and AZTMP.
Abstract:
Patients with first-episode psychosis (FEP) often show dysfunctional coping patterns, low self-efficacy, and external control beliefs that are considered to be risk factors for the development of psychosis. Therefore, these factors should already be present in patients at risk for psychosis (AR). We compared frequencies of deficits in coping strategies (Stress-Coping-Questionnaires, SVF-120/SVF-KJ), self-efficacy, and control beliefs (Competence and Control Beliefs Questionnaire, FKK) between AR (n=21) and FEP (n=22) patients using a cross-sectional design. Correlations among coping, self-efficacy, and control beliefs were assessed in both groups. The majority of AR and FEP patients demonstrated deficits in coping skills, self-efficacy, and control beliefs. However, AR patients more frequently reported a lack of positive coping strategies, low self-efficacy, and a fatalistic externalizing bias. In contrast, FEP patients were characterized by being overly self-confident. These findings suggest that dysfunctional coping, self-efficacy, and control beliefs are already evident in AR patients, though different from those in FEP patients. The pattern of deficits in AR patients closely resembles that of depressive patients, which may reflect high levels of depressiveness in AR patients. Apart from being worthwhile treatment targets, these coping and belief patterns are promising candidates for predicting outcome in AR patients, including the conversion to psychosis.
Abstract:
A growing body of evidence suggests a link between early childhood trauma, post-traumatic stress disorder (PTSD) and a higher risk for dementia in old age. The aim of the present study was to investigate the association between childhood trauma exposure, PTSD and neurocognitive function in a unique cohort of former indentured Swiss child laborers in their late adulthood. To the best of our knowledge this is the first study ever conducted on former indentured child laborers and the first to investigate the relationship between childhood versus adulthood trauma and cognitive function. According to PTSD symptoms and whether they experienced childhood trauma (CT) or adulthood trauma (AT), participants (n = 96) were categorized as belonging to one of four groups: CT/PTSD+, CT/PTSD-, AT/PTSD+, AT/PTSD-. Cognitive function was assessed using the Structured Interview for Diagnosis of Dementia of Alzheimer Type, Multi-infarct Dementia and Dementia of other Etiology according to ICD-10 and DSM-III-R, the Mini-Mental State Examination, and a vocabulary test. Depressive symptoms were investigated as a potential mediator for neurocognitive functioning. Individuals screening positively for PTSD symptoms performed worse on all cognitive tasks compared to healthy individuals, independent of whether they reported childhood or adulthood adversity. When controlling for depressive symptoms, the relationship between PTSD symptoms and poor cognitive function became stronger. Overall, results tentatively indicate that PTSD is accompanied by cognitive deficits which appear to be independent of earlier childhood adversity. Our findings suggest that cognitive deficits in old age may be partly a consequence of PTSD or at least be aggravated by it. However, several study limitations need to be considered. Consideration of cognitive deficits when treating PTSD patients and victims of lifespan trauma (even without a diagnosis of a psychiatric condition) is crucial.
Furthermore, early intervention may prevent long-term deficits in memory function and development of dementia in adulthood.
Abstract:
Bovine mastitis is a frequent problem in Swiss dairy herds. One of the main pathogens causing significant economic loss is Staphylococcus aureus. Various Staph. aureus genotypes with different biological properties have been described. Genotype B (GTB) of Staph. aureus was identified as the most contagious and one of the most prevalent strains in Switzerland. The aim of this study was to identify risk factors associated with the herd-level presence of Staph. aureus GTB and Staph. aureus non-GTB in Swiss dairy herds with an elevated yield-corrected herd somatic cell count (YCHSCC). One hundred dairy herds with a mean YCHSCC between 200,000 and 300,000 cells/mL in 2010 were recruited and each farm was visited once during milking. A standardized protocol investigating demography, mastitis management, cow husbandry, milking system, and milking routine was completed during the visit. A bulk tank milk (BTM) sample was analyzed by real-time PCR for the presence of Staph. aureus GTB to classify the herds into 2 groups: Staph. aureus GTB-positive and Staph. aureus GTB-negative. Moreover, quarter milk samples were aseptically collected for bacteriological culture from cows with a somatic cell count ≥150,000 cells/mL on the last test-day before the visit. The culture results allowed us to allocate the Staph. aureus GTB-negative farms to Staph. aureus non-GTB and Staph. aureus-free groups. Multivariable multinomial logistic regression models were built to identify risk factors associated with the herd-level presence of Staph. aureus GTB and Staph. aureus non-GTB. The prevalence of Staph. aureus GTB herds was 16% (n=16), whereas that of Staph. aureus non-GTB herds was 38% (n=38). Herds that sent lactating cows to seasonal communal pastures had significantly higher odds of being infected with Staph. aureus GTB (odds ratio: 10.2, 95% CI: 1.9-56.6), compared with herds without communal pasturing.
Herds that purchased heifers had significantly higher odds of being infected with Staph. aureus GTB (rather than Staph. aureus non-GTB) compared with herds without purchase of heifers. Furthermore, herds that did not use udder ointment as supportive therapy for acute mastitis had significantly higher odds of being infected with Staph. aureus GTB (odds ratio: 8.5, 95% CI: 1.6-58.4) or Staph. aureus non-GTB (odds ratio: 6.1, 95% CI: 1.3-27.8) than herds that used udder ointment occasionally or regularly. Herds in which the milker performed unrelated activities during milking had significantly higher odds of being infected with Staph. aureus GTB (rather than Staph. aureus non-GTB) compared with herds in which the milker did not perform unrelated activities at milking. Awareness of the 4 potential risk factors identified in this study can guide the implementation of intervention strategies to improve udder health in both Staph. aureus GTB and Staph. aureus non-GTB herds.
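Odds ratios with 95% confidence intervals, as reported above, are commonly derived from a 2x2 exposure-by-outcome table using the standard error of the log odds ratio (the Woolf method); the wide intervals in the abstract reflect small cell counts. A minimal sketch with hypothetical counts, not the study's data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table.

    a, b = exposed with/without outcome; c, d = unexposed with/without outcome.
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)          # SE of log(OR), Woolf method
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical herd counts for illustration only
or_, lo, hi = odds_ratio_ci(12, 4, 20, 64)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # → 9.6 2.8 33.1
```

Because the interval is built on the log scale and back-transformed, it is asymmetric around the point estimate, exactly as in the CIs quoted in the abstract (e.g., 10.2 with CI 1.9-56.6).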
Abstract:
This study aims to evaluate the potential for impacts of ocean acidification on North Atlantic deep-sea ecosystems in response to IPCC AR5 Representative Concentration Pathways (RCPs). Deep-sea biota is likely highly vulnerable to changes in seawater chemistry and sensitive to moderate excursions in pH. Here we show, from seven fully coupled Earth system models, that for three out of four RCPs over 17% of the seafloor area below 500 m depth in the North Atlantic sector will experience pH reductions exceeding −0.2 units by 2100. Increased stratification in response to climate change partially alleviates the impact of ocean acidification on deep benthic environments. We report on major pH reductions over the deep North Atlantic seafloor (depth >500 m) and at important deep-sea features, such as seamounts and canyons. By 2100, and under the high CO2 scenario RCP8.5, pH reductions exceeding −0.2 (−0.3) units are projected in close to 23% (~15%) of North Atlantic deep-sea canyons and ~8% (3%) of seamounts, including seamounts proposed as sites of marine protected areas. The spatial pattern of impacts reflects the depth of the pH perturbation and does not scale linearly with atmospheric CO2 concentration. Impacts may cause negative changes of a magnitude equal to or exceeding the current target of preserving 10% of marine biomes set by the Convention on Biological Diversity, implying that ocean acidification may offset benefits from conservation/management strategies relying on the regulation of resource exploitation.
Abstract:
A prerequisite for preventive measures is to diagnose erosive tooth wear and to evaluate the different etiological factors in order to identify persons at risk. No diagnostic device is available for the assessment of erosive defects. Thus, they can only be detected clinically. Consequently, erosion not diagnosed at an early stage may render timely preventive measures difficult. In order to assess the risk factors, patients should record their dietary intake for a distinct period of time. Then a dentist can determine the erosive potential of the diet. A table with common beverages and foodstuffs is presented for judging the erosive potential. Particularly, patients with more than 4 dietary acid intakes have a higher risk for erosion when other risk factors are present. Regurgitation of gastric acids is a further important risk factor for the development of erosion which has to be taken into account. Based on these analyses, an individually tailored preventive program may be suggested to the patients. It may comprise dietary advice, use of calcium-enriched beverages, optimization of prophylactic regimes, stimulation of salivary flow rate, use of buffering medicaments and particular motivation for nondestructive toothbrushing habits with an erosive-protecting toothpaste as well as rinsing solutions. Since erosion and abrasion often occur simultaneously, all of the causative components must be taken into consideration when planning preventive strategies but only those important and feasible for an individual should be communicated to the patient.
Abstract:
Due to significant improvement in the pre-hospital treatment of patients with out-of-hospital cardiac arrest (OHCA), an increasing number of initially resuscitated patients are being admitted to hospitals. Because of the limited data available and lack of clear guideline recommendations, experts from the EAPCI and "Stent for Life" (SFL) groups reviewed existing literature and provided practical guidelines on selection of patients for immediate coronary angiography (CAG), PCI strategy, concomitant antiplatelet/anticoagulation treatment, haemodynamic support and use of therapeutic hypothermia. Conscious survivors of OHCA with suspected acute coronary syndrome (ACS) should be treated according to recommendations for ST-segment elevation myocardial infarction (STEMI) and high-risk non-ST-segment elevation ACS (NSTE-ACS) without OHCA and should undergo immediate (if STEMI) or rapid (less than two hours if NSTE-ACS) coronary invasive strategy. Comatose survivors of OHCA with ECG criteria for STEMI on the post-resuscitation ECG should be admitted directly to the catheterisation laboratory. For patients without STEMI ECG criteria, a short "emergency department or intensive care unit stop" is advised to exclude non-coronary causes. In the absence of an obvious non-coronary cause, CAG should be performed as soon as possible (less than two hours), in particular in haemodynamically unstable patients. Immediate PCI should be mainly directed towards the culprit lesion if identified. Interventional cardiologists should become an essential part of the "survival chain" for patients with OHCA. There is a need to centralise the care of patients with OHCA to experienced centres.
Abstract:
BACKGROUND The early repolarization (ER) pattern is associated with an increased risk of arrhythmogenic sudden death. However, strategies for risk stratification of patients with the ER pattern are not fully defined. OBJECTIVES This study sought to determine the role of electrophysiology studies (EPS) in risk stratification of patients with ER syndrome. METHODS In a multicenter study, 81 patients with ER syndrome (age 36 ± 13 years, 60 males) and aborted sudden death due to ventricular fibrillation (VF) were included. EPS were performed following the index VF episode using a standard protocol. Inducibility was defined by the provocation of sustained VF. Patients were followed up by serial implantable cardioverter-defibrillator interrogations. RESULTS Despite a recent history of aborted sudden death, VF was inducible in only 18 of 81 (22%) patients. During follow-up of 7.0 ± 4.9 years, 6 of 18 (33%) patients with inducible VF during EPS experienced VF recurrences, whereas 21 of 63 (33%) patients who were noninducible experienced recurrent VF (p = 0.93). VF storm occurred in 3 patients from the inducible VF group and in 4 patients in the noninducible group. VF inducibility was not associated with maximum J-wave amplitude (VF inducible vs. VF noninducible; 0.23 ± 0.11 mV vs. 0.21 ± 0.11 mV; p = 0.42) or J-wave distribution (inferior, odds ratio [OR]: 0.96 [95% confidence interval (CI): 0.33 to 2.81]; p = 0.95; lateral, OR: 1.57 [95% CI: 0.35 to 7.04]; p = 0.56; inferior and lateral, OR: 0.83 [95% CI: 0.27 to 2.55]; p = 0.74), which have previously been demonstrated to predict outcome in patients with an ER pattern. CONCLUSIONS Our findings indicate that current programmed stimulation protocols do not enhance risk stratification in ER syndrome.