970 results for Biology, Microbiology|Health Sciences, Medicine and Surgery
Abstract:
Retinal detachment repair is a common ophthalmologic procedure, and outcome is typically measured by a single factor: improvement in visual acuity. Health-related functional outcome testing, which quantifies a patient's self-reported perception of impairment, can be integrated with objective clinical findings. Based on the patient's self-assessed lifestyle impairment, the physician and patient together can make an informed decision about the treatment that is most likely to benefit the patient.

A functional outcome test (the Houston Vision Assessment Test-Retina; HVAT-Retina) was developed and validated in patients with multiple retinal detachments in the same eye. The HVAT-Retina divides an estimated total impairment into subcomponents: the contribution of visual disability (potentially correctable by retinal detachment surgery) and that of nonvisual physical disabilities (co-morbidities not affected by retinal detachment surgery).

Seventy-six patients participated in this prospective multicenter study. Seven patients were excluded from the analysis because they were not certain of their answers. Cronbach's alpha coefficient was 0.91 for the pre-surgery HVAT-Retina and 0.94 post-surgery. Item-to-total correlations ranged from 0.50 to 0.88. The visual impairment score improved by 9 points from pre-surgery (p = 0.0003). The physical impairment score also improved from pre-surgery (p = 0.0002).

In conclusion, the results of this study demonstrate that the instrument is reliable and valid in patients presenting with recurrent retinal detachments. The HVAT-Retina is a simple instrument and does not burden the patient or the health professional in terms of time or cost. It may be self-administered, not requiring an interviewer. Because the HVAT-Retina was designed to demonstrate outcomes perceivable by the patient, it has the potential to guide the decision-making process between patient and physician.
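The reliability statistics reported above (Cronbach's alpha and item-to-total correlation) are standard internal-consistency measures for a multi-item instrument like the HVAT-Retina. As a rough illustration of how Cronbach's alpha is computed, the sketch below uses invented item scores, not HVAT-Retina data:

```python
# Illustrative sketch: Cronbach's alpha for a k-item questionnaire.
# Responses are stored as one list of item scores per patient;
# the data below are made up for demonstration only.

def cronbach_alpha(rows):
    """rows: list of per-patient item-score lists, all of equal length k."""
    k = len(rows[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [  # 5 hypothetical patients x 4 items
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 4, 3, 3],
    [1, 2, 1, 2],
]
alpha = cronbach_alpha(scores)
```

Values above roughly 0.9, like the 0.91 and 0.94 reported here, indicate strong internal consistency.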
Abstract:
The discoveries of the BRCA1 and BRCA2 genes have made it possible for women from families with hereditary breast/ovarian cancer to determine whether they carry cancer-predisposing genetic mutations. Women with germline mutations have significantly higher probabilities of developing both cancers than the general population. Since the presence of a BRCA1 or BRCA2 mutation does not guarantee future cancer development, the appropriate course of action remains uncertain for these women. Prophylactic mastectomy and oophorectomy remain controversial, since the underlying premise for surgical intervention is based more on reduction in the estimated risk of cancer than on actual evidence of clinical benefit. Issues incorporated in a woman's decision-making process include quality of life without breasts or ovaries, attitudes toward possible surgical morbidity, and the remaining risk of future breast/ovarian cancer despite prophylactic surgery. The incorporation of patient preferences into decision analysis models can determine the quality-adjusted survival of different prophylactic approaches to breast/ovarian cancer prevention. Monte Carlo simulation was conducted on four separate decision models representing prophylactic oophorectomy, prophylactic mastectomy, prophylactic oophorectomy/mastectomy, and screening. The use of three separate preference assessment methods across different populations of women allows researchers to determine how quality-adjusted survival varies according to clinical strategy, method of preference assessment, and the population from which preferences are assessed.
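The Monte Carlo approach described above repeatedly simulates a patient's life course under each strategy and averages the quality-adjusted survival. The sketch below shows the general shape of such a simulation; the states, transition probabilities, and utility weights are entirely invented, whereas a real model would use published risk estimates and elicited preferences:

```python
# Hypothetical sketch of a Monte Carlo quality-adjusted-survival model.
# All numbers are invented for illustration.
import random

def simulate_qaly(p_cancer_per_year, utility_well, utility_cancer,
                  horizon_years=40, n_trials=2000, seed=0):
    """Average quality-adjusted life years over many simulated patients."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        qaly = 0.0
        state = "well"
        for _ in range(horizon_years):
            # one chance per year of transitioning to the cancer state
            if state == "well" and rng.random() < p_cancer_per_year:
                state = "cancer"
            qaly += utility_well if state == "well" else utility_cancer
        total += qaly
    return total / n_trials

# Compare a prophylactic strategy (lower cancer risk, slightly lower
# "well" utility reflecting surgical morbidity) against screening alone.
prophylaxis = simulate_qaly(0.002, 0.92, 0.70)
screening   = simulate_qaly(0.020, 1.00, 0.70)
```

Running the same model with utilities elicited by different preference assessment methods, or from different populations, is what lets the comparison above vary as the abstract describes.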
Abstract:
Background. This study was planned at a time when important questions were being raised about the adequacy of using one hormone to treat hypothyroidism instead of two. Specifically, this trial aimed to replicate prior findings suggesting that substituting 12.5 μg of liothyronine for 50 μg of levothyroxine might improve mood, cognition, and physical symptoms. Additionally, this trial aimed to extend those findings to fatigue.

Methods. A randomized, double-blind, two-period, crossover design was used. Hypothyroid patients stabilized on levothyroxine were invited to participate. Thirty subjects were recruited and randomized. Sequence one received their standard levothyroxine dose in one capsule and placebo in another during the first six weeks. Sequence two received their usual levothyroxine dose minus 50 μg in one capsule and 10 μg of liothyronine in another. At the end of the first six-week period, subjects were crossed over. t tests were used to assess carry-over and treatment effects.

Results. Twenty-seven subjects completed the trial. The majority of completers had an autoimmune etiology. The mean baseline levothyroxine dose was 121 μg/d (±26.0). Subjects reported small increases in fatigue as measured by the Piper Fatigue Scale (0.9, p = 0.09) and in symptoms of depression as measured by the Beck Depression Inventory-II (2.3, p = 0.16) and the General Health Questionnaire-30 (4.7, p = 0.14) while on substitution treatment. However, none of these differences was statistically significant. Measures of working memory were essentially unchanged between treatments. Thyroid stimulating hormone was about twice as high during substitution treatment (p = 0.16). The free thyroxine index was reduced by 0.7 (p < 0.001) and total serum thyroxine by 3.0 (p < 0.001), while serum triiodothyronine was increased by 20.5 (p < 0.001) on substitution treatment.

Conclusions. Substituting an equivalent amount of liothyronine for a portion of levothyroxine in patients with hypothyroidism does not decrease fatigue or symptoms of depression, nor does it improve working memory. However, because of the changes in serum hormone levels and the small increments in fatigue and depression symptoms on substitution treatment, a question was raised about the role of T3 in the serum.
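The strength of the crossover design above is that each subject serves as their own control: the treatment effect is tested on within-subject differences between the two periods. A minimal sketch of that paired t statistic, using invented fatigue scores (not trial data), might look like:

```python
# Illustrative paired analysis for a two-period crossover trial.
# Each subject contributes one outcome on each treatment; the scores
# below are fabricated for demonstration.
from statistics import mean, stdev
from math import sqrt

def paired_t(standard, substitution):
    """Return (mean within-subject difference, paired t statistic)."""
    diffs = [b - a for a, b in zip(standard, substitution)]
    d_bar, sd = mean(diffs), stdev(diffs)
    t = d_bar / (sd / sqrt(len(diffs)))  # t on len(diffs) - 1 df
    return d_bar, t

fatigue_standard     = [3.1, 4.0, 2.5, 3.6, 2.9, 3.3]
fatigue_substitution = [3.8, 4.6, 3.2, 4.1, 3.5, 3.7]
d, t = paired_t(fatigue_standard, fatigue_substitution)
```

A separate t test on period sums, comparing the two sequences, is the usual check for the carry-over effect the Methods mention.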
Abstract:
The scale-up of antiretrovirals (ARVs) to treat HIV/AIDS in Africa has been rapid over the last five years. Botswana was the first African nation to roll out a comprehensive ARV program, in which ARVs are available to all citizens who qualify. Excellent adherence to these ARVs is necessary to maintain HIV suppression and the ongoing health of all individuals taking them. Children rely almost entirely on their caregivers for the administration of these medications, and very little research has examined the factors that affect both adherence and disclosure to the child of their HIV status.

Methods. This cross-sectional study used multiple methods to examine adherence, disclosure, and stigma across various dimensions of the child's and caregiver's lives, including 30 caregiver questionnaires, interviewer-administered 3-day adherence recalls, pharmacy pill counts, and chart reviews. Fifty in-depth interviews were conducted with caregivers, male caregivers, teenagers, and health care providers.

Results. Perceived family stigma was found to be a predictor of excellent adherence. After controlling for age, children who live with their mothers were significantly less likely to know their HIV status than children living with any other relative (OR = 0.403, p = 0.014). Children who have a grandmother living in the household or taking care of them each day are significantly more likely to have optimal adherence than children who do not have grandmother involvement in their daily lives.

Discussion. Visible illness plays an intermediary role between adherence and perceived family stigma: caregivers know that ARVs suppress the physical manifestations of HIV and, in an effort to avoid unnecessary disclosure of the child's status to family members, maintain excellent adherence in their children. Grandmothers play a vital role in supporting the care and treatment of children in Botswana.
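The OR = 0.403 reported above is an odds ratio from a model adjusted for age; the unadjusted version of such a measure comes straight from a 2x2 table. As a sketch of the basic computation (the counts below are hypothetical, not study data):

```python
# Odds ratio with a 95% CI from a 2x2 table, via the usual
# log-OR normal approximation. Counts are invented for illustration.
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """a, b: exposed with/without outcome; c, d: unexposed with/without.
    Returns (OR, 95% CI lower bound, 95% CI upper bound)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = exp(log(or_) - 1.96 * se)
    hi = exp(log(or_) + 1.96 * se)
    return or_, lo, hi

# e.g. 8 of 20 children living with mothers knew their status,
# versus 18 of 30 living with other relatives (fabricated counts)
result = odds_ratio(8, 12, 18, 12)
```

An OR below 1, as in the abstract, means the "exposed" group (children living with their mothers) had lower odds of the outcome (knowing their status).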
Abstract:
Since the tragic events of September 11, 2001, the United States has engaged in building the infrastructure and developing the expertise necessary to protect its borders and its citizens from further attacks against its homeland. One approach has been the development of academic courses to educate individuals on the nature and dangers of subversive attacks and to prepare them to respond to attacks and other large-scale emergencies in their roles as working professionals, participating members of their communities, and collaborators with first responders. An initial review of the literature failed to reveal any university-based emergency management courses or programs with a disaster medical component, despite the public health significance of and need for such programs. In the fall of 2003, the School of Management at The University of Texas at Dallas introduced a continuing education Certificate in Emergency Management and Preparedness Program. This thesis will (1) describe the development and implementation of a new Disaster Medical Track as a component of this Certificate in Emergency Management and Preparedness Program, (2) analyze the need for and effectiveness of this Disaster Medical Track, and (3) propose improvements to the track based on this analysis.
Abstract:
Introduction and objective. A number of prognostic factors have been reported for predicting survival in patients with renal cell carcinoma, yet few studies have analyzed the effects of those factors at different stages of the disease process. In this study, different stages of disease progression were evaluated: from nephrectomy to metastasis, from metastasis to death, and from evaluation to death.

Methods. In this retrospective follow-up study, the records of 97 deceased renal cell carcinoma (RCC) patients were reviewed between September 2006 and October 2006. Patients with TNM Stage IV disease before nephrectomy or with cancer diagnoses other than RCC were excluded, leaving 64 records for analysis. TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were analyzed in relation to time to metastasis. The same factors, plus time from nephrectomy to metastasis, were tested for significance in relation to time from metastasis to death. Finally, laboratory values at the time of evaluation, Eastern Cooperative Oncology Group (ECOG) performance status, the UCLA Integrated Staging System (UISS), time from nephrectomy to metastasis, and the same factors were tested for significance in relation to time from evaluation to death. Linear regression and Cox proportional hazards models (univariate and multivariate) were used to test significance. The Kaplan-Meier log-rank test was used to detect any significant differences between groups at various endpoints.

Results. Compared to patients with negative lymph nodes at the time of nephrectomy, those with a single positive lymph node had a significantly shorter time to metastasis (p < 0.0001). Compared to other histological types, clear cell histology had significantly longer metastasis-free survival (p = 0.003). Clear cell histology compared to other types (p = 0.0002 univariate, p = 0.038 multivariate) and time to metastasis with log conversion (p = 0.028) significantly affected time from metastasis to death. Metastasis-free intervals of greater than one year and greater than two years carried a statistically significant survival benefit compared to metastasis within one or two years (p = 0.004 and p = 0.0318). Time from evaluation to death was affected by a greater-than-one-year metastasis-free interval (p = 0.0459), alcohol consumption (p = 0.044), LDH (p = 0.006), ECOG performance status (p < 0.001), and hemoglobin level (p = 0.0092). The UISS risk-stratified the patient population in a statistically significant manner for survival (p = 0.001). No other factors were found to be significant.

Conclusion. Clear cell histology is predictive for both time to metastasis and time from metastasis to death. Nodal status at the time of nephrectomy may predict risk of metastasis. The time interval to metastasis significantly predicts time from metastasis to death and time from evaluation to death. ECOG performance status and hemoglobin level predict survival outcome at evaluation. Finally, the UISS appropriately stratifies risk in our population.
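The Kaplan-Meier estimator behind the group comparisons above multiplies, at each event time, the conditional probability of surviving past that time. A minimal sketch of the product-limit computation, with invented follow-up times (1 = death, 0 = censored):

```python
# Minimal Kaplan-Meier product-limit sketch. Times and event flags
# below are invented; a real analysis would use patient follow-up data.

def kaplan_meier(times, events):
    """times: follow-up durations; events: 1 = event occurred, 0 = censored.
    Returns [(event time, survival probability)] at each event time."""
    data = sorted(zip(times, events))
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths:
            s *= 1 - deaths / n_at_risk  # conditional survival past t
            curve.append((t, s))
        while i < len(data) and data[i][0] == t:  # skip ties at t
            i += 1
    return curve

curve = kaplan_meier([5, 8, 8, 12, 20, 24], [1, 1, 0, 1, 0, 1])
```

The log-rank test then compares the observed event counts in each group against what the pooled curve would predict.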
Abstract:
Introduction. 3-Hydroxy-3-methylglutaryl CoA reductase inhibitors ("statins") are widely used for hypercholesterolemia, and statin-induced myopathy is well known. Whether statins contribute to exacerbations of myasthenia gravis (MG) requiring hospitalization is not well known.

Objectives. To determine the frequency of statin use in patients with MG seen at the neuromuscular division of the University of Alabama at Birmingham (UAB), and to evaluate any association between statin use and MG exacerbations requiring hospitalization in patients with an established diagnosis of myasthenia gravis.

Methods. We reviewed the records of all current MG patients at the UAB neuromuscular department to obtain details on statin use and any hospitalizations due to exacerbation of MG over the period from January 1, 2003 to December 31, 2006.

Results. Of the 113 MG patients on whom information was available for this period, 40 were on statins during at least one clinic visit. Statin users were more likely to be older (mean age 60.2 vs. 53.8, p = 0.029) and male (70.0% vs. 43.8%, p = 0.008), and had a later onset of myasthenia gravis (mean age at onset 49.8 vs. 42.9 years, p = 0.051). Neither the total number of hospitalizations nor the proportion of subjects with at least one hospitalization during the study period differed between the statin and no-statin groups. However, when hospitalizations that arose from a suspected precipitant were excluded ("events"), the proportion of subjects who had at least one such event during the study period was higher in the statin group. In the final Cox proportional hazards model for cumulative time to event, statin use (OR = 6.44, p < 0.01) and baseline immunosuppression (OR = 3.03, p = 0.07) were found to increase the odds of an event.

Conclusions. Statin use may increase the rate of hospitalizations due to MG exacerbation when exacerbations precipitated by other suspected factors are excluded.
Abstract:
Objectives. To predict who will develop an aortic dissection, and to create male and female prediction models using the risk factors age, ethnicity, hypertension, high cholesterol, smoking, alcohol use, diabetes, heart attack, congestive heart failure, congenital and non-congenital heart disease, Marfan syndrome, and bicuspid aortic valve.

Methods. Using 572 patients diagnosed with aortic aneurysms, separate models were developed for males and females using 80% of the data and then verified using the remaining 20%.

Results. The male model predicted the probability of a male having a dissection (p = 0.076), and the female model predicted the probability of a female having a dissection (p = 0.054). The validation models did not support the choice of the developmental models.

Conclusions. The best models obtained suggest that those at greater risk of having a dissection are males with non-congenital heart disease who drink alcohol, and females with non-congenital heart disease and a bicuspid aortic valve.
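The 80/20 develop-and-validate design described in the Methods is a holdout split: the model is fit on a random 80% of the records and its predictions are checked against the untouched 20%. A minimal sketch of that split (the record list and seed are arbitrary placeholders, not study data):

```python
# Sketch of an 80/20 development/validation split of patient records.
# The shuffle is seeded so the split is reproducible.
import random

def split_80_20(records, seed=42):
    """Return (development set, validation set) as a random 80/20 split."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

# e.g. with 572 patient records, this yields 457 for development
# and 115 for validation
development, validation = split_80_20(list(range(572)))
```

Because the validation set plays no role in fitting, poor performance there, as reported above, is a genuine warning that the developmental models may not generalize.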
Abstract:
Context: Despite tremendous strides in HIV treatment over the past decade, resistance remains a major problem. A growing number of patients develop resistance and require new therapies to suppress viral replication.

Objective: To assess the safety of multiple administrations of the anti-CD4 receptor monoclonal antibody ibalizumab, given as intravenous (IV) infusions in three dosage regimens, in subjects infected with human immunodeficiency virus (HIV-1).

Design: Phase 1, multi-center, open-label, randomized clinical trial comparing the safety, pharmacokinetics, and antiviral activity of three dosages of ibalizumab.

Setting: Six clinical trial sites in the United States.

Participants: A total of twenty-two HIV-positive patients on no antiretroviral therapy or a stable failing regimen.

Intervention: Patients were randomized to one of two treatment groups (Arms A and B), followed by non-randomized enrollment in Arm C. Patients randomized to Arm A received 10 mg/kg of ibalizumab every 7 days, for a total of 10 doses. Patients randomized to Arm B received a total of six doses: a single loading dose of 10 mg/kg on Day 1, followed by five maintenance doses of 6 mg/kg every 14 days starting at Week 1. Patients assigned to Arm C received 25 mg/kg of ibalizumab every 14 days for a total of 5 doses. All patients were followed for safety for an additional 7 to 8 weeks.

Main Outcome Measures: Clinical and laboratory assessments of the safety and tolerability of multiple administrations of ibalizumab in HIV-infected patients. Secondary measures of efficacy included HIV-1 RNA (viral load) measurements.

Results: Twenty-one patients were treatment-experienced and 1 was naïve to HIV therapy. Six patients were failing despite therapy and 15 were on no current HIV treatment. Mean baseline viral load (4.78 log10; range 3.7-5.9) and CD4+ cell counts (332/μL; range 89-494) were similar across cohorts. Mean peak decreases in viral load from baseline of 0.99 log10, 1.11 log10, and 0.96 log10 occurred by Week 2 in Cohorts A, B, and C, respectively. Viral loads decreased by >1.0 log10 in 64% of patients; 4 patients' viral loads were suppressed to <400 copies/mL. Viral loads returned towards baseline by Week 9, with reduced susceptibility to ibalizumab. CD4+ cell counts rose transiently and returned toward baseline. Maximum median elevations above baseline in CD4+ cell counts for Cohorts A, B, and C were +257, +198, and +103 cells/μL, respectively, and occurred within 3 weeks in 16 of 22 subjects. The half-life of ibalizumab was 3-3.5 days, and elimination was characteristic of capacity-limited kinetics. Administration of ibalizumab was well tolerated. Four serious adverse events were reported during the study; none were related to study drug. Headache, nausea, and cough were the most frequently reported treatment-emergent adverse events, and there were no laboratory abnormalities related to study drug.

Conclusions: Ibalizumab administered either weekly or biweekly was safe and well tolerated, and demonstrated antiviral activity. Further studies of ibalizumab in combination with standard antiretroviral treatments are warranted.
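The viral-load results above are on a log10 scale, where a 1.0 log10 decrease means a 10-fold drop in copies/mL. The helpers below illustrate that arithmetic with made-up values; note that the first-order decay formula is only an approximation here, since the abstract reports capacity-limited (not first-order) elimination:

```python
# Arithmetic behind log10 viral-load changes and half-life decay.
# All input values are illustrative, not trial data.
from math import log10

def log10_drop(baseline_copies, nadir_copies):
    """Viral-load decrease on the log10 scale (1.0 = tenfold drop)."""
    return log10(baseline_copies) - log10(nadir_copies)

def fraction_remaining(t_days, half_life_days):
    """Fraction of drug left after t_days, assuming first-order decay.
    Only an approximation: the study reports capacity-limited kinetics."""
    return 0.5 ** (t_days / half_life_days)

# e.g. a fall from 100,000 to 10,000 copies/mL is a 1.0 log10 drop,
# and with a 3.5-day half-life, one quarter of the drug remains at day 7
drop = log10_drop(100_000, 10_000)
left = fraction_remaining(7, 3.5)
```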
Abstract:
Background. Cardiac risk assessment in cancer patients has not been extensively studied. We evaluated the role of stress myocardial perfusion imaging (MPI) in predicting cardiovascular outcomes in cancer patients undergoing non-cardiac surgery.

Methods. A retrospective chart review was performed on 507 patients who had an MPI from 01/2002 to 03/2003 and underwent non-cardiac surgery. Median follow-up duration was 1.5 years. A Cox proportional hazards model was used to determine the time to first event. End points included total cardiac events (cardiac death, myocardial infarction (MI), and coronary revascularization), cardiac death, and all-cause mortality.

Results. Of all 507 MPI studies, 146 (29%) were abnormal, and there were significant differences in risk factors between the normal and abnormal MPI groups. Mean age was 66±11 years, 60% of patients were male, and the median follow-up duration was 1.8 years (25th percentile = 0.8 years, 75th percentile = 2.2 years). The majority of patients had an adenosine stress study (53%), with fewer exercise (28%) and dobutamine stress (16%) studies. In the total group there were 39 total cardiac events, 31 cardiac deaths, and 223 all-cause deaths during the study. Univariate predictors of total cardiac events included CAD (p = 0.005), previous MI (p = 0.005), use of beta blockers (p = 0.002), and not receiving chemotherapy (p = 0.012). Similarly, the univariate predictors of cardiac death included previous MI (p = 0.019) and use of beta blockers (p = 0.003). In the multivariate model for total cardiac events, age at surgery (HR 1.04, p = 0.030), use of beta blockers (HR 2.46, p = 0.011), dobutamine MPI (HR 3.08, p = 0.018), and low EF (HR 0.97, p = 0.02) were significant predictors of worse outcomes. In the multivariate model for cardiac death, beta blocker use (HR 2.74, p = 0.017) and low EF (HR 0.95, p < 0.003) were predictors. The only univariate MPI predictor of total cardiac events was scar severity (p = 0.005), while the MPI predictors of cardiac death were scar severity (p = 0.001) and ischemia severity (p = 0.02).

Conclusions. Stress MPI is a useful tool in predicting long-term outcomes in cancer patients undergoing surgery. Ejection fraction and severity of myocardial scar are important factors determining long-term outcomes in this group.
Abstract:
Hypertension in adults is defined by risk for cardiovascular morbidity and mortality, but in children, hypertension is defined using population norms. The diagnosis of hypertension in children and adolescents requires only casual blood pressure measurements, but the use of ambulatory blood pressure monitoring to further evaluate patients with elevated blood pressure has been recommended in the Fourth Report on the Diagnosis, Evaluation, and Treatment of High Blood Pressure in Children and Adolescents. The aim of this study was to assess the association between stage of hypertension (using both casual and 24-hour ambulatory blood pressure measurements) and target organ damage, defined by left ventricular hypertrophy (LVH), in a sample of children and adolescents in Houston, TX.

A retrospective analysis was performed on the primary de-identified data from the combined participants of two IRB-approved cross-sectional studies. The studies collected basic demographic data, height, weight, casual blood pressures, ambulatory blood pressures, and left ventricular measurements by echocardiography on children aged 8 to 18 years. Hypertension was defined and staged using the criteria for ambulatory blood pressure reported by Lurbe et al. [1], with some modification. Left ventricular hypertrophy was defined using left ventricular mass index (LVMI) criteria specific for children and adults: the pediatric criterion was LVMI2.7 above the 95th percentile for gender, and the adult criterion was LVMI2.7 > 51 g/m2.7. Participants from the original studies were included in this analysis if they had complete demographic information, anthropometric measures, casual blood pressures, ambulatory blood pressures, and echocardiography data.

There were 241 children and adolescents included: 19.1% were normotensive, 17.0% had white coat hypertension, 11.6% had masked hypertension, and 52.4% had confirmed hypertension. Of those with hypertension, 22.4% had stage 1 hypertension, 5.8% had stage 2 hypertension, and 24.1% had stage 3 hypertension. Participants with confirmed hypertension were more likely to have LVH by the pediatric criterion than those who were normotensive [OR 2.19, 95% CI (1.04-4.63)]; LVH defined by the adult criterion did not differ significantly between normotensives and hypertensives [OR 2.08, 95% CI (0.58-7.52)]. However, there was a significant trend of increasing prevalence of LVH across the six blood pressure categories for LVH defined by both the pediatric and adult criteria (p < 0.001 and p = 0.02, respectively). Additionally, mean LVM indexed by height^2.7 showed a significantly increasing trend across blood pressure stages from normal to stage 3 hypertension (p < 0.02).

Pediatric hypertension is defined using population norms, and although children with mild hypertension are not at increased odds of having target organ damage defined by LVH, those with severe hypertension are more likely to have LVH. Staging hypertension by ambulatory blood pressure further describes an individual's risk for LVH target organ damage.
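The adult LVH criterion above is a fixed cutoff on left ventricular mass indexed to height raised to the power 2.7, with the 51 g/m^2.7 threshold stated in the abstract. A minimal sketch of that indexing (the mass and height values passed in would come from echocardiography; those shown are invented):

```python
# Left ventricular mass index: LV mass (g) divided by height (m) ** 2.7,
# with the adult LVH cutoff of 51 g/m^2.7 given in the abstract.

def lvmi(lv_mass_g, height_m):
    """Left ventricular mass indexed to height ** 2.7."""
    return lv_mass_g / height_m ** 2.7

def lvh_by_adult_criterion(lv_mass_g, height_m, cutoff_g_per_m27=51.0):
    """True if the indexed mass exceeds the adult cutoff."""
    return lvmi(lv_mass_g, height_m) > cutoff_g_per_m27

# e.g. a hypothetical adolescent with 160 g LV mass at 1.6 m height
flag = lvh_by_adult_criterion(160.0, 1.6)
```

The pediatric criterion works the same way, except the cutoff is the gender-specific 95th percentile rather than a single fixed value.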
Abstract:
Dialysis patients are at high risk for hepatitis B infection, a serious but preventable disease. Prevention strategies include administration of the hepatitis B vaccine, but dialysis patients have been noted to have a poor immune response to the vaccine and to lose immunity more rapidly. The long-term immunogenicity of the hepatitis B vaccine has not been well defined in pediatric dialysis patients, especially when administered during infancy as a routine childhood immunization.

Purpose. The aim of this study was to determine the median duration of hepatitis B immunity and to study the effect of vaccination timing and other cofactors on the duration of hepatitis B immunity in pediatric dialysis patients.

Methods. Duration of hepatitis B immunity was determined by Kaplan-Meier survival analysis. Stratified survival analyses were compared using the log-rank test. Multivariate analysis by Cox regression was used to estimate hazard ratios for the effect of timing of vaccine administration and other covariates on the duration of hepatitis B immunity.

Results. 193 patients (163 incident patients) had complete data available for analysis. Mean age was 11.2±5.8 years and mean ESRD duration was 59.3±97.8 months. Kaplan-Meier analysis showed that the total median overall duration of immunity (since the time of the primary vaccine series) was 112.7 months (95% CI: 96.6, 124.4), whereas the median overall duration of immunity for incident patients was 106.3 months (95% CI: 93.93, 124.44). Incident patients had a median on-dialysis duration of hepatitis B immunity of 37.1 months (95% CI: 24.16, 72.26). Multivariate adjusted analysis showed a significant difference between patients based on the timing of hepatitis B vaccination (p < 0.001): patients immunized after the start of dialysis had a hazard ratio of 6.13 (2.87, 13.08) for loss of hepatitis B immunity compared to patients immunized as infants (p < 0.001).

Conclusion. This study confirms that patients immunized after the onset of dialysis have a shorter overall duration of hepatitis B immunity, as measured by hepatitis B antibody titers, and that after the start of dialysis, protective antibody titer levels in pediatric dialysis patients wane rapidly compared with those in healthy children.
Abstract:
Objective. To determine the impact of antibiotic-associated diarrhea (AAD) on health-related quality of life (HRQOL) in hospitalized patients compared to matched controls without diarrhea.

Methods. This is a hospital-based, matched case-control study using secondary data from a prospective cohort trial of patients receiving broad-spectrum antibiotics. One hundred and seventy-eight patients were recruited, of whom 18 (10%) reported having antibiotic-associated diarrhea. Two non-diarrhea controls were selected for each case with diarrhea, giving a final sample of 18 cases and 36 controls. Responses from the Short Form (SF) 36 questionnaire were aggregated into eight domains: physical functioning (PF), role-functioning physical (RP), bodily pain (BP), general health (GH), social functioning (SF), vitality (VT), role-functioning emotional (RE), and mental health (MH). The eight domains were compared between cases and controls. A GI-targeted HRQOL measure was administered to 13 patients with AAD; responses from this disease-specific instrument were combined into eight subscale scores: dysphoria, interference with activity, body image, health worry, food avoidance, social reaction, sex, and relationships.

Results. The sample consisted of 41 females (75.9%) and 13 males (24.1%) aged 53.5 ± 14.4 years (range: 21-76 years). Twenty-five patients (46%) were Caucasian, 15 (27%) were African American, 13 (24%) were Hispanic, and 1 (2%) was Asian. In univariate analysis, no significant differences in quality-of-life outcomes were observed in any of the SF-36 domains between the case patients and matched controls, although there were trends toward decreased scores on the role-functioning physical, bodily pain, general health, social functioning, mental health, and mental summary domains. In total, 7 of 8 domain scores were lower in patients with AAD, and 5 of 8 domain scores were lower by more than 5 points (considered clinically significant). After controlling for age, patients with antibiotic-associated diarrhea had significantly lower general health, vitality, and mental health scale scores (p < 0.05 each). The disease-specific scores were significantly lower in patients with AAD than published norms for irritable bowel syndrome patients.

Conclusion. In this small sample, several areas of decreased QOL were noted in patients with AAD compared to matched controls. A larger sample is needed to validate these results.