893 results for high risk behavior
Abstract:
BACKGROUND AND PURPOSE: This is the first study investigating neoadjuvant interstitial high-dose-rate (HDR) brachytherapy combined with chemotherapy in patients with breast cancer. The goal was to evaluate the type of surgical treatment, histopathologic response, side effects, local control, and survival. PATIENTS AND METHODS: 53 patients, who could not be treated with breast-conserving surgery due to initial tumor size (36/53) or due to an unfavorable breast-tumor ratio (17/53), were analyzed retrospectively. All but one were in an intermediate/high-risk group (St. Gallen criteria). The patients received a neoadjuvant protocol consisting of systemic chemotherapy combined with fractionated HDR brachytherapy (2 × 5 Gy/day, total dose 30 Gy). In cases where breast-conserving surgery was performed, patients received additional external-beam radiotherapy (EBRT, 1.8 Gy/day, total dose 50.4 Gy). In patients who underwent mastectomy but had an initial tumor size of T3/T4 and/or more than three infiltrated lymph nodes, EBRT was also performed. RESULTS: In 30/53 patients (56.6%) breast-conserving surgery could be performed. The overall histopathologic response rate was 96.2%, with a complete remission in 28.3% of patients. 49/53 patients were evaluable for follow-up. After a median of 58 months (45-72 months), one patient showed mild fibrosis of the breast tissue, and three patients had mild to moderate lymphatic edema of the arm. 6/49 (12.2%) patients died of distant metastases, 4/49 (8.2%) were alive with disease, and 39/49 (79.6%) were free from disease. Local recurrence was observed in only one case (2%), 40 months after primary therapy; after mastectomy, this patient is currently free from disease. CONCLUSION: The combination of interstitial HDR brachytherapy and chemotherapy is a well-tolerated and effective neoadjuvant treatment in patients with breast cancer. Compared with EBRT, the treatment time is short. Postoperative EBRT of the whole breast, if necessary, is still possible after neoadjuvant brachytherapy. Even though the number of patients does not permit definite conclusions, the results are promising with regard to survival and the very low rate of local recurrences.
Abstract:
BACKGROUND: In the UK, population screening for unmet need has failed to improve the health of older people. Attention is turning to interventions targeted at 'at-risk' groups. Living alone in later life is seen as a potential health risk, and older people living alone are thought to be an at-risk group worthy of further intervention. AIM: To explore the clinical significance of living alone and the epidemiology of lone status as an at-risk category, by investigating associations between lone status and health behaviours, health status, and service use in non-disabled older people. DESIGN OF STUDY: Secondary analysis of baseline data from a randomised controlled trial of health risk appraisal in older people. SETTING: Four group practices in suburban London. METHOD: Sixty per cent of 2641 community-dwelling non-disabled people aged 65 years and over registered at a practice agreed to participate in the study; 84% of these returned completed questionnaires. A third of this group (n = 860; 33.1%) lived alone and two-thirds (n = 1741; 66.9%) lived with someone else. RESULTS: Those living alone were more likely to report fair or poor health, poor vision, difficulties in instrumental and basic activities of daily living, worse memory and mood, lower physical activity, poorer diet, worsening function, risk of social isolation, hazardous alcohol use, having no emergency carer, and multiple falls in the previous 12 months. After adjustment for age, sex, income, and educational attainment, living alone remained associated with multiple falls, functional impairment, poor diet, smoking status, risk of social isolation, and three self-reported chronic conditions: arthritis and/or rheumatism, glaucoma, and cataracts. CONCLUSION: Clinicians working with independently living older people who live alone should anticipate higher levels of disease and disability in these patients, and higher health and social risks, much of which will be due to older age, lower educational status, and female sex. Living alone itself appears to be associated with a higher risk of falling and with constellations of pathologies, including visual loss and joint disorders. Targeted population screening using lone status may be useful in identifying older individuals at high risk of falling.
Abstract:
BACKGROUND: Early catheter-related infection is a serious complication in cancer treatment, although risk factors for its occurrence are not well established. The authors conducted a prospective study to identify the risk factors for developing early catheter-related infection. METHODS: All consecutive patients with cancer who underwent insertion of a central venous catheter were enrolled and were followed prospectively for 1 month. The study endpoint was the occurrence of early catheter-related infection. RESULTS: Over 10,392 catheter-days of follow-up, 14 of 371 patients had early catheter-related infections (1.34 per 1000 catheter-days). The causative pathogens were gram positive in 11 of 14 patients. In univariate analysis, the risk factors for early catheter-related infection were age <10 years (P = .0001), difficulties during insertion (P < 10⁻⁶), blood product administration (P < 10⁻³), parenteral nutrition (P < 10⁻⁴), and catheter use for >2 days (P < 10⁻⁶). In multivariate analysis, 3 variables remained significantly associated with the risk of early catheter-related infection: age <10 years (odds ratio [OR], 18.4; 95% confidence interval [95% CI], 1.9-106.7), difficulties during the insertion procedure (OR, 25.6; 95% CI, 4.2-106), and parenteral nutrition (OR, 28.5; 95% CI, 4.2-200). CONCLUSIONS: On the day of insertion, 2 variables were identified that were associated with a high risk of developing an early catheter-related infection: young age and difficulties during insertion. The results from this study may be used to identify patients who are at high risk of infection and who may be candidates for preventive strategies.
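Editorial note: the incidence density quoted above is simple arithmetic, reproduced below as a minimal Python sketch; only the counts come from the abstract, and the function name is ours.

```python
# Incidence density: events per 1,000 units of follow-up time.
# Counts (14 infections over 10,392 catheter-days) are from the abstract.

def incidence_per_1000_days(events: int, follow_up_days: int) -> float:
    return events / follow_up_days * 1000

print(round(incidence_per_1000_days(14, 10_392), 2))  # 1.35; the abstract reports 1.34
```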
Abstract:
BACKGROUND: Single-nucleotide polymorphisms in genes involved in lipoprotein and adipocyte metabolism may explain why dyslipidemia and lipoatrophy occur in some but not all antiretroviral therapy (ART)-treated individuals. METHODS: We evaluated the contribution of APOC3 −482C→T, −455T→C, and 3238C→G; the ε2 and ε4 alleles of APOE; and TNF −238G→A to dyslipidemia and lipoatrophy by longitudinally modeling >2600 lipid determinations and 2328 lipoatrophy assessments in 329 ART-treated patients during a median follow-up period of 3.4 years. RESULTS: In human immunodeficiency virus (HIV)-infected individuals, the effects of variant alleles of APOE on plasma cholesterol and triglyceride levels and of APOC3 on plasma triglyceride levels were comparable to those reported in the general population. However, when treated with ritonavir, individuals with unfavorable genotypes of APOC3 and APOE were at risk of extreme hypertriglyceridemia. They had median plasma triglyceride levels of 7.33 mmol/L, compared with 3.08 mmol/L in the absence of ART. The net effect of the APOE*APOC3*ritonavir interaction was an increase in plasma triglyceride levels of 2.23 mmol/L. No association between TNF −238G→A and lipoatrophy was observed. CONCLUSIONS: Variant alleles of APOE and APOC3 contribute to an unfavorable lipid profile in patients with HIV. Interactions between genotypes and ART can lead to severe hyperlipidemia. Genetic analysis may identify patients at high risk for severe ritonavir-associated hypertriglyceridemia.
Abstract:
BACKGROUND: Transcatheter aortic valve implantation (TAVI) for high-risk and inoperable patients with severe aortic stenosis is an emerging procedure in cardiovascular medicine. Little is known of the impact of TAVI on renal function. METHODS: We retrospectively analysed renal baseline characteristics and outcome in 58 patients, including 2 patients on chronic haemodialysis, undergoing TAVI at our institution. Acute kidney injury (AKI) was defined according to the RIFLE classification. RESULTS: Fifty-eight patients with severe symptomatic aortic stenosis not considered suitable for conventional surgical valve replacement, with a mean age of 83 ± 5 years, underwent TAVI. Two patients died during transfemoral valve implantation and two patients in the first month after TAVI, resulting in a 30-day mortality of 6.9%. Vascular access was transfemoral in 46 patients and transapical in 12. Estimated glomerular filtration rate (eGFR) increased in 30 patients (56%). Fifteen patients (28%) developed AKI, of whom four had to be dialyzed temporarily and one remained on chronic renal replacement therapy. Risk factors for AKI included, among others, transapical access, the number of blood transfusions, postinterventional thrombocytopaenia, and the systemic inflammatory response syndrome (SIRS). CONCLUSIONS: TAVI is feasible in patients with a high burden of comorbidities and in patients with pre-existing end-stage renal disease who would otherwise not be considered candidates for conventional aortic valve replacement. Although eGFR improved in more than half of the patients, this benefit was associated with a risk of postinterventional AKI. Future investigations should define preventive measures against peri-procedural kidney injury.
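Editorial note: for readers unfamiliar with the RIFLE classification invoked above, the sketch below summarizes the commonly cited serum creatinine/GFR criteria as background only; the abstract does not state which criteria (creatinine, GFR, or urine output) were applied, so this is not the study's operational definition.

```python
# Commonly cited RIFLE severity grades for acute kidney injury, keyed by the
# creatinine/GFR criteria; urine-output criteria exist as well but are omitted.
RIFLE = {
    "Risk":    "serum creatinine x1.5, or GFR decrease >25%",
    "Injury":  "serum creatinine x2, or GFR decrease >50%",
    "Failure": "serum creatinine x3, or creatinine >=4 mg/dL with an acute "
               "rise >=0.5 mg/dL, or GFR decrease >75%",
    "Loss":    "persistent acute renal failure for >4 weeks",
    "ESKD":    "end-stage kidney disease for >3 months",
}

for grade, criteria in RIFLE.items():
    print(f"{grade}: {criteria}")
```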
Abstract:
BACKGROUND: A complete remission is essential for prolonging survival in patients with acute myeloid leukemia (AML). Daunorubicin is a cornerstone of the induction regimen, but the optimal dose is unknown. In older patients, it is usual to give daunorubicin at a dose of 45 to 50 mg per square meter of body-surface area. METHODS: Patients in whom AML or high-risk refractory anemia had been newly diagnosed and who were 60 to 83 years of age (median, 67) were randomly assigned to receive cytarabine, at a dose of 200 mg per square meter by continuous infusion for 7 days, plus daunorubicin for 3 days, either at the conventional dose of 45 mg per square meter (411 patients) or at an escalated dose of 90 mg per square meter (402 patients); this treatment was followed by a second cycle of cytarabine at a dose of 1000 mg per square meter every 12 hours for 6 days. The primary end point was event-free survival. RESULTS: The complete remission rates were 64% in the group that received the escalated dose of daunorubicin and 54% in the group that received the conventional dose (P=0.002); the rates of remission after the first cycle of induction treatment were 52% and 35%, respectively (P<0.001). There was no significant difference between the two groups in the incidence of hematologic toxic effects, 30-day mortality (11% and 12% in the two groups, respectively), or the incidence of moderate, severe, or life-threatening adverse events (P=0.08). Survival end points in the two groups did not differ significantly overall, but patients in the escalated-treatment group who were 60 to 65 years of age, as compared with the patients in the same age group who received the conventional dose, had higher rates of complete remission (73% vs. 51%), event-free survival (29% vs. 14%), and overall survival (38% vs. 23%). CONCLUSIONS: In patients with AML who are older than 60 years of age, escalation of the dose of daunorubicin to twice the conventional dose, with the entire dose administered in the first induction cycle, effects a more rapid response and a higher response rate than does the conventional dose, without additional toxic effects. (Current Controlled Trials number, ISRCTN77039377; Netherlands National Trial Register number, NTR212.)
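Editorial note: the trial doses are expressed per body-surface area (45 vs. 90 mg per square meter). As a hedged illustration of how such a dose translates into an absolute amount, the sketch below uses the Mosteller BSA formula; the trial does not state which BSA formula was used, so that choice is an assumption.

```python
from math import sqrt

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    # Mosteller formula: BSA (m^2) = sqrt(height [cm] * weight [kg] / 3600)
    return sqrt(height_cm * weight_kg / 3600)

def absolute_dose_mg(dose_per_m2: float, height_cm: float, weight_kg: float) -> float:
    return dose_per_m2 * bsa_mosteller(height_cm, weight_kg)

# Hypothetical 170 cm, 75 kg patient: ~1.88 m^2, so the escalated 90 mg/m^2
# daily dose corresponds to roughly 169 mg of daunorubicin.
print(round(absolute_dose_mg(90, 170, 75)))
```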
Abstract:
Objective: Suicide attempts are common in patients being treated for alcohol-use disorders (AUDs). However, clinical assessment of suicide risk is difficult. In this Swiss multisite study, we propose a decision tree to facilitate identification of profiles of AUD patients at high risk for suicidal behavior. Method: In this retrospective study, we used a sample of 700 patients (243 female) attending 1 of 12 treatment programs for AUDs in the German-speaking part of Switzerland. Sixty-nine patients who reported a suicide attempt in the 3 months before the index treatment were compared, on a set of risk factors, with 631 patients without a suicide attempt. Receiver operating characteristic (ROC) analyses were used to identify patients at risk of having had a suicide attempt in the previous 3 months. Results: Consistent with previous empirical findings in AUD patients, a prior history of attempted suicide and severe symptoms of depression and aggression considerably increased the risk of a suicide attempt and, in combination, raised the likelihood of a prior suicide attempt to 52%. In addition, one third of AUD patients who had a history of suicide attempts and previous inpatient psychiatric treatment, or who were male and had previous inpatient psychiatric treatment, also reported a suicide attempt. Conclusions: The empirically supported decision tree helps to identify profiles of suicidal AUD patients in Switzerland and supplements clinicians' judgments in making triage decisions for suicide management.
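Editorial note: to make the method concrete, the sketch below fits a toy decision tree and scores it with a ROC-based metric. It is emphatically not the authors' fitted tree: all variables, codings, and data are synthetic placeholders mimicking the risk factors named above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 700  # matches the sample size only; the data are random
X = np.column_stack([
    rng.integers(0, 2, n),  # prior suicide attempt (hypothetical 0/1 coding)
    rng.normal(size=n),     # depression severity score (hypothetical)
    rng.normal(size=n),     # aggression score (hypothetical)
    rng.integers(0, 2, n),  # previous inpatient psychiatric treatment
])
y = rng.integers(0, 2, n)   # recent suicide attempt (toy outcome)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=25).fit(X, y)
print(roc_auc_score(y, tree.predict_proba(X)[:, 1]))  # in-sample AUC; illustrative only
```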
Abstract:
Individuals with preclinical disorders of glucose metabolism should be systematically assigned to the high-risk group for diabetes mellitus and provided with preventive measures. Their underlying insulin resistance is determined with the help of a checklist and a method called homeostasis model assessment (HOMA). Patients with impaired fasting glucose (IFG) must change their lifestyles; if this does not lead to a response, or the patient is unable to modify behavior, medication is required. In the case of manifest type 2 diabetes mellitus, a graded schedule is used for differential management, which should be based on nutritional and exercise therapy. Oral medication with metformin is probably the drug of choice in both obese and non-obese patients. It is crucial not to delay intensifying treatment until HbA1c has drifted into an unsatisfactory range (wait-and-see strategy); rather, treatment should be intensified as soon as persistent deterioration becomes apparent (proactive therapy). In diabetes mellitus, the same guidelines for secondary prevention apply to the associated cardiovascular risk factors as in coronary heart disease. Here too, intensified and, especially, early treatment is preferable to a conservative, wait-and-see approach.
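Editorial note: the abstract invokes HOMA without giving the formula. As background, the sketch below shows the standard HOMA-IR calculation; the threshold mentioned in the comment is a commonly used rule of thumb that varies by population and is not taken from this article.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    # HOMA-IR = (fasting glucose [mmol/L] * fasting insulin [uU/mL]) / 22.5
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

print(round(homa_ir(5.5, 10.0), 2))  # 2.44; values above ~2.5 are often read
                                     # as insulin resistance (population-dependent)
```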
Abstract:
BACKGROUND The role of surgery for patients with metastatic esophagogastric adenocarcinoma (EGC) is not defined. The purpose of this study was to define selection criteria for patients who may benefit from resection following systemic chemotherapy. METHODS From 1987 to 2007, 160 patients presenting with synchronous metastatic EGC (cT3/4 cNany cM0/1, finally pM1) were treated with chemotherapy followed by resection of the primary tumor and metastases. Clinical and histopathological data and the site and number of metastases were analyzed. A prognostic score was established and validated in a second cohort from another academic center (n = 32). RESULTS The median survival (MS) in cohort 1 was 13.6 months. Significant prognostic factors were grading (p = 0.046), ypT category (p = 0.001), ypN category (p = 0.011), R category (p = 0.015), lymphangiosis (p = 0.021), and clinical (p = 0.004) and histopathological response (p = 0.006), but not the localization or number of metastases. The sum of grading (G1/2: 0 points; G3/4: 1 point), clinical response (responder: 0; nonresponder: 1), and R category (complete resection: 0; R1: 1; R2: 2) defines two groups of patients with significantly different survival (p = 0.001): a low-risk group (score 0/1, n = 22; MS 35.3 months, 3-year survival 47.6%) and a high-risk group (score 2-4, n = 126; MS 12.0 months, 3-year survival 14.2%). The score showed a strong trend in the validation cohort (p = 0.063): low-risk group, MS not reached, 3-year survival 57.1%; high-risk group, MS 19.9 months, 3-year survival 6.7%. CONCLUSION We observed long-term survival after resection of metastatic EGC. A simple clinical score may help to identify a subgroup of patients with a high chance of benefiting from resection. However, accurately predicting whether a complete resection can be achieved, which is an integral element of the score, remains challenging.
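Editorial note: the prognostic score lends itself to a direct transcription into code. The sketch below implements the point assignments exactly as stated in the abstract; the function name and the "R0" label for a complete resection are ours.

```python
def egc_resection_score(grading: str, clinical_responder: bool, r_category: str) -> int:
    """Prognostic score 0-4 as defined in the abstract."""
    score = 1 if grading in ("G3", "G4") else 0        # G1/2: 0 points; G3/4: 1 point
    score += 0 if clinical_responder else 1            # responder: 0; nonresponder: 1
    score += {"R0": 0, "R1": 1, "R2": 2}[r_category]   # complete resection (R0): 0
    return score                                       # 0-1: low risk; 2-4: high risk

print(egc_resection_score("G2", clinical_responder=True, r_category="R0"))   # 0 -> low risk
print(egc_resection_score("G3", clinical_responder=False, r_category="R2"))  # 4 -> high risk
```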
Abstract:
Mood disorders are the most common form of mental illness and one of the leading causes of morbidity worldwide. Major depressive disorder and bipolar disorder have a lifetime prevalence of 16.2% and 4.4%, respectively. Women comprise a substantial proportion of this population, and an estimated 500,000 pregnancies each year involve women with a psychiatric condition. Management with psychotropic medications is considered the standard of care for most patients with mood disorders. However, many of these medications are known human teratogens. Because pregnant women with mood disorders face a high risk of relapse if unmanaged, the obstetrician faces a unique challenge in providing the best care to both mother and baby. It has been suggested that many obstetricians overestimate the teratogenic risks associated with psychotropic medications while concurrently underestimating the risks associated with unmanaged mood disorders. This may be due to a knowledge gap regarding the most current teratogen information and a lack of official management guidelines. Therefore, the purpose of this study was to determine the current knowledge base of obstetricians regarding the teratogenic effects of common psychotropic medications, as well as to capture current management practices for pregnant women with mood disorders. A total of 117 Texas obstetricians responded to a survey regarding teratogen knowledge and management practice. It was common for respondents to encounter women who disclose both having a mood disorder and taking a psychotropic medication during pregnancy. Many respondents did not utilize up-to-date drug counseling resources, and were unaware of, or overestimated, the teratogenic risks of common medications used to treat mood disorders. Finally, many respondents reported wanting to refer pregnant patients with mood disorders to psychiatrists for co-management, but are reportedly restricted in doing so by accessibility or insurance issues. This study demonstrates that there is a knowledge gap among obstetricians regarding the teratogenicity of common psychotropic medications used to manage a patient population they frequently encounter. Further, obstetricians have vastly different risk perceptions of these medications, resulting in varied management approaches and recommendations. Future research should focus on establishing standard practice guidelines, as well as better accessibility to psychiatric services for pregnant women.
Abstract:
The objective of this longitudinal study, conducted in a neonatal intensive care unit, was to characterize the pain response of high-risk very low birth weight infants (<1,500 g) from 23 to 38 weeks post-menstrual age (PMA) by measuring heart rate variability (HRV). Heart period data were recorded before, during, and after a blood draw by heel lance or wrist venipuncture for routine clinical evaluation. The pain response to the blood draw procedure and age-related changes of HRV in the low-frequency and high-frequency bands were modeled with linear mixed-effects models. HRV in both bands decreased during pain, followed by a recovery to near-baseline levels. Venipuncture and mechanical ventilation were factors that attenuated the HRV response to pain. Baseline HRV increased with post-menstrual age, but the growth rate of high-frequency power was reduced in mechanically ventilated infants. There was some evidence that the low-frequency HRV response to pain improved with advancing PMA.
Abstract:
Objectives. Cardiovascular disease (CVD), including CVD secondary to type II diabetes, a significant health problem among Mexican American populations, originates in early childhood. This study seeks to determine risk factors available to the health practitioner that can identify the child at potential risk of developing CVD, thereby enabling early intervention. Design. This is a secondary analysis of cross-sectional data of matched Mexican American parents and children selected from the HHANES, 1982-1984. Methods. Parents at high risk for CVD were identified based on medical history and clinical and physical findings. Factor analysis was performed on children's skinfold thicknesses, height, weight, and systolic and diastolic blood pressures, in order to produce a limited number of uncorrelated child CVD risk factors. Multiple regression analyses were then performed to determine other CVD markers associated with these factors, independently for mothers and fathers. Results. Factor analysis of children's measurements revealed three uncorrelated latent variables summarizing the children's CVD risk: Factor 1 ('Fatness'), Factor 2 ('Size and Maturity'), and Factor 3 ('Blood Pressure'), together accounting for the bulk of variation in children's measurements (86-89%). Univariate analyses showed that children from high CVD risk families did not differ from children of low risk families in occurrence of high blood pressure, overweight, biological maturity, acculturation score, or social and economic indicators. However, multiple regression using the factor scores (from factor analysis) as dependent variables revealed that higher CVD risk in parents was significantly associated with increased fatness and increased blood pressure in the children. The father's CVD risk status was associated with higher levels of body fat in his children and higher levels of blood pressure in sons. The mother's CVD risk status was associated with higher blood pressure levels in children, and obesity in the mother was associated with higher fatness levels in her children. Conclusion. Occurrence of cardiovascular disease and its risk factors in parents of Mexican American children may be used to identify children at potentially higher risk for developing CV disease in the future. Obesity in mothers appears to be an important marker for the development of higher levels of body fatness in children.
Abstract:
This dissertation addresses the risk of lung cancer associated with occupational exposures in the petroleum refining and petrochemical industries. Earlier epidemiologic studies of this association did not adjust for cigarette smoking or lacked specific exposure classifications. The Texas EXposure Assessment System (TEXAS) was developed with data from a population-based, case-comparison study conducted in five southeast Texas counties between 1976 and 1980. TEXAS uses job and process categories developed by the American Petroleum Institute, as well as time-oriented variables, to identify high-risk groups. An industry-wide increased risk for lung cancer was associated with jobs having low-level hydrocarbon exposure that also involve other occupational inhalation exposures (OR = 2.0, adjusted for smoking and latency effects). The prohibition of cigarette smoking in jobs with high-level hydrocarbon exposure might explain part of the increased risk for jobs with low-level hydrocarbon exposures. Asbestos exposure accounts for a large part of the risk associated with jobs having other inhalation exposures besides hydrocarbons. Workers in petroleum refineries were not shown to have an increased occupational risk for lung cancer. The increased risk for lung cancer among petrochemical workers (OR = 3.1, smoking and latency adjusted) is associated with all jobs that involve other inhalation exposure characteristics (not only low-level hydrocarbon exposures). Findings for contract workers and workers exposed to specific chemicals were inconclusive, although some hypotheses for future research were identified. The study results demonstrate that the predominant risk for lung cancer is due to cigarette smoking (OR = 9.8). Cigarette smoking accounts for 86.5% of the incident lung cancer cases within the study area. Workers in the petroleum industry smoke significantly less than persons employed in other industries (p << 0.001). Only 2.2% of the incident lung cancer cases may be attributed to petroleum industry jobs; lifestyle factors (e.g., nutrition) may be associated with the balance of the cases. The results from this study also suggest possible high-risk time periods (OR = 3.9, smoking and occupation adjusted). Artifacts in time-oriented findings may result from the latency interval for lung cancer, secular peaks in age- and sex-specific incidence rates, or periods of hazardous exposures in the petroleum industry.
Abstract:
BACKGROUND Conventional factors do not fully explain the distribution of cardiovascular outcomes. Biomarkers are known to participate in well-established pathways associated with cardiovascular disease, and may therefore provide further information over and above conventional risk factors. This study sought to determine whether individual and/or combined assessment of 9 biomarkers improved discrimination, calibration, and reclassification of cardiovascular mortality. METHODS 3267 patients (2283 men), aged 18-95 years, at intermediate to high risk of cardiovascular disease were followed in this prospective cohort study. Conventional risk factors and biomarkers were selected using forward and backward stepwise Cox proportional hazards models. RESULTS During 10 years of follow-up, 546 fatal cardiovascular events occurred. Four biomarkers (interleukin-6, neutrophils, von Willebrand factor, and 25-hydroxyvitamin D) were retained during the stepwise selection procedures for subsequent analyses. Simultaneous inclusion of these biomarkers significantly improved discrimination, as measured by the C-index (0.78, P = 0.0001) and the integrated discrimination improvement (0.0219, P < 0.0001). Collectively, these biomarkers improved net reclassification for cardiovascular death by 10.6% (P < 0.0001) when added to the conventional risk model. CONCLUSIONS In terms of adverse cardiovascular prognosis, a biomarker panel consisting of interleukin-6, neutrophils, von Willebrand factor, and 25-hydroxyvitamin D offered significant incremental value beyond that conveyed by conventional risk factors alone.
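Editorial note: the 10.6% figure refers to net reclassification improvement. As background, the sketch below shows the standard categorical NRI formula; the up/down reclassification counts are placeholders, and only the event/non-event totals (546 and 2,721) come from the abstract.

```python
def nri(up_ev: int, down_ev: int, n_ev: int,
        up_non: int, down_non: int, n_non: int) -> float:
    # NRI = (P(up | event) - P(down | event)) + (P(down | non-event) - P(up | non-event))
    return (up_ev - down_ev) / n_ev + (down_non - up_non) / n_non

# Placeholder counts only; not the study's reclassification table.
print(round(nri(60, 25, 546, 300, 200, 2_721), 3))  # ~0.101, i.e. an NRI near 10%
```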
Abstract:
There is a need to validate risk assessment tools for hospitalised medical patients at risk of venous thromboembolism (VTE). We investigated whether a predefined cut-off of the Geneva Risk Score, as compared with the Padua Prediction Score, accurately distinguishes low-risk from high-risk patients regardless of the use of thromboprophylaxis. In the multicentre, prospective Explicit ASsessment of Thromboembolic RIsk and Prophylaxis for Medical PATients in SwitzErland (ESTIMATE) cohort study, 1,478 hospitalised medical patients were enrolled, of whom 637 (43%) did not receive thromboprophylaxis. The primary endpoint was symptomatic VTE or VTE-related death at 90 days. The study is registered at ClinicalTrials.gov, number NCT01277536. According to the Geneva Risk Score, the cumulative rate of the primary endpoint was 3.2% (95% confidence interval [CI] 2.2-4.6%) in 962 high-risk vs 0.6% (95% CI 0.2-1.9%) in 516 low-risk patients (p=0.002); among patients without prophylaxis, these rates were 3.5% and 0.8%, respectively (p=0.029). In comparison, the Padua Prediction Score yielded a cumulative rate of the primary endpoint of 3.5% (95% CI 2.3-5.3%) in 714 high-risk vs 1.1% (95% CI 0.6-2.3%) in 764 low-risk patients (p=0.002); among patients without prophylaxis, these rates were 3.2% and 1.5%, respectively (p=0.130). The negative likelihood ratio was 0.28 (95% CI 0.10-0.83) for the Geneva Risk Score and 0.51 (95% CI 0.28-0.93) for the Padua Prediction Score. In conclusion, among hospitalised medical patients, the Geneva Risk Score predicted VTE and VTE-related mortality and compared favourably with the Padua Prediction Score, particularly in its accuracy in identifying low-risk patients who do not require thromboprophylaxis.
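Editorial note: the negative likelihood ratios quoted above follow the standard definition LR− = (1 − sensitivity) / specificity. The sketch below illustrates the computation; the sensitivity and specificity values are made up for illustration and are not the study's.

```python
def negative_likelihood_ratio(sensitivity: float, specificity: float) -> float:
    # LR- = (1 - sensitivity) / specificity; lower values rule out disease better
    return (1 - sensitivity) / specificity

print(round(negative_likelihood_ratio(0.90, 0.36), 2))  # 0.28, illustrative only
```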