832 results for Mortality and race
Abstract:
Background Accidental poisoning is one of the leading causes of injury in the United States, second only to motor vehicle accidents. According to the Centers for Disease Control and Prevention, rates of accidental poisoning mortality have been increasing nationally over the past fourteen years. In Texas, mortality rates from accidental poisoning have mirrored national trends, increasing linearly from 1981 to 2001. The purpose of this study was to determine if there are spatiotemporal clusters of accidental poisoning mortality among Texas counties, and if so, whether there are variations in clustering and risk according to gender and race/ethnicity. Methods The spatial scan statistic, in combination with GIS software, was used to identify potential clusters among Texas counties between 1980 and 2001, and Poisson regression was used to evaluate risk differences. Results Several significant (p < 0.05) accidental poisoning mortality clusters were identified in different regions of Texas. The geographic and temporal persistence of clusters varied by racial group, gender, and race/gender combination, and most of the clusters persisted into the present decade. Poisson regression revealed significant differences in risk according to race and gender. The Black population was at greatest risk of accidental poisoning mortality relative to other racial/ethnic groups (Relative Risk (RR) = 1.25, 95% Confidence Interval (CI) = 1.24–1.27), and the male population was at elevated risk (RR = 2.47, 95% CI = 2.45–2.50) when the female population was used as the reference. Conclusion The findings of the present study provide evidence for the existence of accidental poisoning mortality clusters in Texas, demonstrate the persistence of these clusters into the present decade, and show the spatiotemporal variations in risk and clustering of accidental poisoning deaths by gender and race/ethnicity. By quantifying disparities in accidental poisoning mortality by place, time and person, this study demonstrates the utility of the spatial scan statistic combined with GIS and regression methods in identifying priority areas for public health planning and resource allocation.
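To make the regression step concrete, the following is a minimal sketch of the kind of Poisson rate model described above, written in Python with statsmodels. The file name, column names, and reference categories are illustrative assumptions, not the study's actual data.

```python
# Sketch only: Poisson regression of stratum-level death counts on race and
# gender, with person-time entering as a log offset. Names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("poisoning_mortality.csv")  # hypothetical county/stratum data

model = smf.glm(
    "deaths ~ C(race, Treatment('White')) + C(gender, Treatment('Female'))",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),  # converts counts into rates
).fit()

# Exponentiated coefficients are rate ratios analogous to the reported RRs
# (e.g., Black vs. White, Male vs. Female), with 95% CIs from conf_int().
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```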
Abstract:
BACKGROUND Low bispectral index values frequently reflect EEG suppression and have been associated with postoperative mortality. This study investigated whether intraoperative EEG suppression was an independent predictor of 90-day postoperative mortality and explored risk factors for EEG suppression. METHODS This observational study included 2662 adults enrolled in the B-Unaware or BAG-RECALL trials. A cohort with >5 cumulative minutes of EEG suppression was defined and 1:2 propensity-matched to a non-suppressed cohort (≤5 min suppression). We evaluated the association between EEG suppression and mortality using multivariable logistic regression, and examined risk factors for EEG suppression using zero-inflated mixed effects analysis. RESULTS Ninety-day postoperative mortality was 3.9% overall, 6.3% in the suppressed cohort, and 3.0% in the non-suppressed cohort {odds ratio (OR) [95% confidence interval (CI)]=2.19 (1.48-3.26)}. After matching and multivariable adjustment, EEG suppression alone was not associated with mortality [OR (95% CI)=0.83 (0.55-1.25)]; however, the interaction between EEG suppression and mean arterial pressure (MAP) <55 mm Hg was associated with mortality [OR (95% CI)=2.96 (1.34-6.52)]. Risk factors for EEG suppression were older age, number of comorbidities, chronic obstructive pulmonary disease, and higher intraoperative doses of benzodiazepines, opioids, or volatile anaesthetics. EEG suppression was less likely in patients with cancer, preoperative alcohol, opioid or benzodiazepine consumption, and intraoperative nitrous oxide exposure. CONCLUSIONS Although EEG suppression was associated with increasing anaesthetic administration and comorbidities, the hypothesis that intraoperative EEG suppression predicts postoperative mortality was supported only when suppression was coincident with low MAP. CLINICAL TRIAL REGISTRATION NCT00281489 and NCT00682825.
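As a rough illustration of the 1:2 propensity-matching step, the sketch below estimates a propensity score for EEG suppression and pairs each suppressed patient with its two nearest non-suppressed neighbors by score. The covariate list and column names are assumptions; a real analysis would also enforce calipers and matching without replacement.

```python
# Sketch only: propensity-score estimation and greedy 1:2 nearest-neighbor
# matching (with replacement, for brevity). All names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("eeg_cohort.csv")                             # hypothetical data
covariates = ["age", "comorbidity_count", "copd", "mean_map"]  # assumed confounders

# Propensity score: modeled probability of >5 min cumulative EEG suppression
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["suppressed"])
df["ps"] = ps.predict_proba(df[covariates])[:, 1]

suppressed = df[df["suppressed"] == 1]
controls = df[df["suppressed"] == 0]

# Two nearest non-suppressed patients per suppressed patient, by score
nn = NearestNeighbors(n_neighbors=2).fit(controls[["ps"]])
_, idx = nn.kneighbors(suppressed[["ps"]])
matched = pd.concat([suppressed, controls.iloc[idx.ravel()]])
```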
Abstract:
Independent of traditional risk factors, psychosocial risk factors increase the risk of cardiovascular disease (CVD). Studies in the field of psychotherapy have shown that the construct of incongruence (a discrepancy between desired and achieved goals) affects the outcome of therapy. We prospectively measured the impact of incongruence in patients after a cardiac rehabilitation program. We examined 198 CVD patients enrolled in an 8–12-week comprehensive cardiac rehabilitation program. Patients completed the German short version of the Incongruence Questionnaire and the SF-36 Health Questionnaire to measure quality of life (QoL) at discharge from rehabilitation. Endpoints at follow-up were CVD-related hospitalizations plus all-cause mortality. During a mean follow-up period of 54.3 months, 29 patients experienced a CVD-related hospitalization and 3 patients died. Incongruence at discharge from rehabilitation was, independent of traditional risk factors, a significant predictor of CVD-related hospitalizations plus all-cause mortality (HR 2.03, 95% CI 1.29–3.20, p = .002). We also found a significant interaction of incongruence with mental QoL (HR .96, 95% CI .92–.99, p = .027): incongruence predicted poor prognosis if QoL was low (p = .017), but not if QoL was high (p = .74). Incongruence at discharge thus predicted future CVD-related hospitalizations plus all-cause mortality, and mental QoL moderated this relationship. Incongruence should therefore be considered in treatment planning and outcome measurement.
Abstract:
OBJECTIVES In HIV-negative populations, light to moderate alcohol consumption is associated with lower cardiovascular morbidity and mortality than alcohol abstention. Whether the same holds true for HIV-infected individuals has not been evaluated in detail. DESIGN Cohort study. METHODS Adults on antiretroviral therapy in the Swiss HIV Cohort Study with follow-up after August 2005 were included. We categorized alcohol consumption as abstention, low (1-9 g/d), moderate (10-29 g/d in women and 10-39 g/d in men), and high intake. Cox proportional hazards models were used to describe the association between alcohol consumption and cardiovascular disease-free survival (combined endpoint) as well as cardiovascular disease events (CADE) and overall survival. Baseline and time-updated risk factors for CADE were included in the models. RESULTS Among the 9,741 individuals included, there were 788 events of major CADE or death during 46,719 years of follow-up, corresponding to an incidence of 1.69 events/100 person-years. Follow-up according to alcohol consumption level was 51% abstention, 20% low, 23% moderate and 6% high intake. Compared to abstention, low (hazard ratio 0.79, 95% confidence interval 0.63-0.98) and moderate alcohol intake (0.78, 0.64-0.95) were associated with a lower incidence of the combined endpoint. There was no significant association between alcohol consumption and CADE alone. CONCLUSIONS Compared to abstention, low and moderate alcohol intake were associated with better CADE-free survival. However, this result was mainly driven by mortality, and the specific impact of drinking patterns and type of alcoholic beverage on this outcome remains to be determined.
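The survival model described above can be sketched with the lifelines library as follows; the data file, covariates, and coding of the alcohol categories (abstention as the reference, all-zero indicators) are assumptions for illustration, not the cohort's actual analysis code.

```python
# Sketch only: Cox proportional hazards model for the combined endpoint,
# with abstention as the reference alcohol category. Names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("shcs_alcohol.csv")  # hypothetical cohort extract

# One indicator per non-reference alcohol level (abstention = all zeros)
df = pd.get_dummies(df, columns=["alcohol"])
df = df.drop(columns=["alcohol_abstention"])

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "cade_or_death",
        "alcohol_low", "alcohol_moderate", "alcohol_high", "age", "smoker"]],
    duration_col="followup_years",
    event_col="cade_or_death",
)
cph.print_summary()  # exp(coef) column gives hazard ratios vs. abstention
```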
Abstract:
The ratio of cystatin C (cysC) to creatinine (crea) is regarded as a marker of glomerular filtration quality associated with cardiovascular morbidities. We sought to determine reference intervals for the serum cysC-crea ratio in seniors, to determine whether other low-molecular-weight molecules exhibit similar behavior in individuals with altered glomerular filtration quality, and to investigate associations with adverse outcomes. A total of 1382 subjectively healthy Swiss volunteers aged 60 years or older were enrolled in the study. Reference intervals were calculated according to Clinical & Laboratory Standards Institute (CLSI) guideline EP28-A3c. After a baseline exam, a 4-year follow-up survey recorded information about overall morbidity and mortality. The cysC-crea ratio (mean 0.0124 ± 0.0026 mg/μmol) was significantly higher in women and increased progressively with age. Other associated factors were hemoglobin A1c, mean arterial pressure, and C-reactive protein (P < 0.05 for all). Participants exhibiting shrunken pore syndrome had significantly higher ratios of 3.5-66.5 kDa molecules (brain natriuretic peptide, parathyroid hormone, β2-microglobulin, cystatin C, retinol-binding protein, thyroid-stimulating hormone, α1-acid glycoprotein, lipase, amylase, prealbumin, and albumin) to creatinine. There was no such difference in the ratios of very low-molecular-weight molecules (urea, uric acid) to creatinine or in the ratios of molecules larger than 66.5 kDa (transferrin, haptoglobin) to creatinine. The cysC-crea ratio was significantly predictive of mortality and subjective overall morbidity at follow-up in logistic regression models adjusting for several factors. In conclusion, the cysC-crea ratio exhibits age- and sex-specific reference intervals in seniors, may indicate the relative retention of biologically active low-molecular-weight compounds, and can independently predict the risk of overall mortality and morbidity in the elderly.
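For large reference samples, CLSI EP28-A3c recommends a nonparametric reference interval: the central 95% of values, bounded by the 2.5th and 97.5th percentiles. A minimal sketch of that computation, assuming a hypothetical data file and a partition by sex:

```python
# Sketch only: nonparametric 2.5th-97.5th percentile reference intervals
# for the cysC-crea ratio, partitioned by sex. All names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("seniors_labs.csv")
df["cysc_crea"] = df["cystatin_c"] / df["creatinine"]  # mg/umol

for sex, grp in df.groupby("sex"):
    lo, hi = np.percentile(grp["cysc_crea"], [2.5, 97.5])
    print(f"{sex}: reference interval {lo:.4f}-{hi:.4f} mg/umol (n={len(grp)})")
```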
Abstract:
BACKGROUND In an effort to reduce firearm mortality rates in the USA, US states have enacted a range of firearm laws that either strengthen or deregulate the main existing federal gun control law, the Brady Law. We set out to determine the independent association of different firearm laws with overall firearm mortality, homicide firearm mortality, and suicide firearm mortality across all US states. We also projected the potential reduction in firearm mortality if the three most strongly associated firearm laws were enacted at the federal level. METHODS We constructed a cross-sectional, state-level dataset from Nov 1, 2014, to May 15, 2015, using counts of firearm-related deaths in each US state for the years 2008-10 (stratified by intent [homicide and suicide]) from the US Centers for Disease Control and Prevention's Web-based Injury Statistics Query and Reporting System, data about 25 state firearm laws implemented in 2009, and state-specific characteristics such as firearm ownership for 2013, firearm export rates and non-firearm homicide rates for 2009, and unemployment rates for 2010. Our primary outcome measure was overall firearm-related mortality per 100 000 people in the USA in 2010. We used Poisson regression with robust variances to derive incidence rate ratios (IRRs) and 95% CIs. FINDINGS 31 672 firearm-related deaths occurred in 2010 in the USA (10·1 per 100 000 people; mean state-specific count 631·5 [SD 629·1]). Of 25 firearm laws, nine were associated with reduced firearm mortality, nine were associated with increased firearm mortality, and seven had an inconclusive association. After adjustment for relevant covariates, the three state laws most strongly associated with reduced overall firearm mortality were universal background checks for firearm purchase (multivariable IRR 0·39 [95% CI 0·23-0·67]; p=0·001), ammunition background checks (0·18 [0·09-0·36]; p<0·0001), and identification requirements for firearms (0·16 [0·09-0·29]; p<0·0001). Projected federal-level implementation of universal background checks for firearm purchase could reduce national firearm mortality from 10·35 to 4·46 deaths per 100 000 people, background checks for ammunition purchase could reduce it to 1·99 per 100 000, and firearm identification to 1·81 per 100 000. INTERPRETATION Very few of the existing state-specific firearm laws are associated with reduced firearm mortality, and this evidence underscores the importance of focusing on relevant and effective firearms legislation. Implementation of universal background checks for the purchase of firearms or ammunition, and firearm identification nationally, could substantially reduce firearm mortality in the USA. FUNDING None.
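A minimal sketch of the kind of robust-variance Poisson model named in the methods, assuming hypothetical state-level variables; the authors' actual covariate set and projection method are not reproduced here.

```python
# Sketch only: state-level Poisson regression with robust (sandwich)
# standard errors, yielding IRRs per law. All variables are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

states = pd.read_csv("state_firearm_laws.csv")  # hypothetical dataset

fit = smf.glm(
    "firearm_deaths ~ universal_bg_check + ammo_bg_check + firearm_id"
    " + ownership_rate + unemployment_rate + nonfirearm_homicide_rate",
    data=states,
    family=sm.families.Poisson(),
    offset=np.log(states["population"]),
).fit(cov_type="HC0")  # robust variance, as named in the abstract's methods

print(np.exp(fit.params))  # incidence rate ratios (IRRs)
# Note: the abstract's projected national rates come from the authors' own
# projection method, which this sketch does not attempt to reproduce.
```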
Abstract:
The objective of this survey was to determine herd-level risk factors for mortality, unwanted early slaughter, and metaphylactic application of antimicrobial group therapy in Swiss veal calves in 2013. A questionnaire regarding farm structure, farm management, mortality, and antimicrobial use was sent to all farmers registered in a Swiss label program setting requirements for improved animal welfare and sustainability. Risk factors were determined by multivariable logistic regression. A total of 619 veal producers returned a usable questionnaire (response rate=28.5%), of which 40.9% only fattened their own calves (group O), 56.9% fattened their own calves and additional purchased calves (group O&P), and 2.3% only purchased calves for fattening (group P). A total of 19,077 calves entered the fattening units in 2013, of which 21.7%, 66.7%, and 11.6% belonged to groups O, O&P, and P, respectively. Mortality was 0% in 322 herds (52.0%), between 0% and 3% in 47 herds (7.6%), and ≥3% in 250 herds (40.4%). Significant risk factors for mortality were purchasing calves, herd size, a higher incidence of bovine respiratory disease (BRD), and access to an outside pen. Metaphylaxis was used on 13.4% of the farms (7.9% only upon arrival, 4.4% only later in the fattening period, 1.1% both upon arrival and later): in 3.2% of herds in group O, 17.9% in group O&P, and 92.9% in group P. Application of metaphylaxis upon arrival was positively associated with purchase (OR=8.9) and herd size (OR=1.2 per 10 calves). Metaphylaxis later in the production cycle was positively associated with group size (OR=2.9) and risk of respiratory disease (OR=1.2 per 10% higher risk), and negatively associated with the use of individual antimicrobial treatment (OR=0.3). In many countries, purchase and a large herd size are inherently connected to veal production. The Swiss situation, with large commercial herds but also smaller herds with little or no purchase of calves, made it possible to investigate the effect of these factors on mortality and antimicrobial drug use. The results of this study show that a system in which small farms raise the calves from their own herds has substantial potential to improve animal health and reduce antimicrobial drug use.
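The "OR=1.2 per 10 calves" phrasing corresponds to rescaling a continuous covariate before fitting. A minimal sketch of such a herd-level logistic model, with all file and column names assumed:

```python
# Sketch only: herd-level logistic regression for metaphylaxis at arrival.
# Dividing herd size by 10 makes the OR read "per 10 calves". Names assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

herds = pd.read_csv("veal_herds.csv")            # hypothetical survey data
herds["herd_size_10"] = herds["herd_size"] / 10.0

fit = smf.logit(
    "metaphylaxis_arrival ~ purchased_calves + herd_size_10 + outside_pen",
    data=herds,
).fit()

print(np.exp(fit.params))      # odds ratios (e.g., OR per 10 extra calves)
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```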
Abstract:
Severe liver injury due to drugs is a frequent cause of catastrophic illness and hospitalization, and its significant morbidity, mortality, and excess medical care costs make it a challenging public health problem. The role of associated risk factors such as alcohol consumption in contributing to this high mortality remains to be studied. This study was conducted to assess the impact of alcohol use on mortality in patients with idiosyncratic drug-induced liver injury (IDILI), while adjusting for age, gender, race/ethnicity, and education level. The data from this study indicate only a small excess risk of death among IDILI patients using alcohol, and the difference was not statistically significant. The major contribution of this study to the field of public health is that it excludes a large hazard of alcohol consumption on mortality among IDILI patients.
Abstract:
A retrospective cohort study was conducted among 1542 patients diagnosed with CLL between 1970 and 2001 at the M. D. Anderson Cancer Center (MDACC). Changes in clinical characteristics and the impact of CLL on life expectancy were assessed across three decades (1970–2001), and the role of clinical factors in the prognosis of CLL was evaluated among patients diagnosed between 1985 and 2001 using Kaplan-Meier and Cox proportional hazards methods. Among 1485 CLL patients diagnosed from 1970 to 2001, patients in the recent cohort (1985–2001) were diagnosed at a younger age and an earlier stage compared to the earliest cohort (1970–1984). There was a 44% reduction in mortality among patients diagnosed in 1985–1995 compared to those diagnosed in 1970–1984, after adjusting for age, sex and Rai stage among patients who ever received treatment. There was an overall loss of 11 years of life expectancy (5 years for stage 0) among the 1485 patients compared with the expected life expectancy of the age-, sex- and race-matched US general population, with a 43% decrease in the 10-year survival rate. Abnormal cytogenetics was associated with shorter progression-free (PF) survival after adjusting for age, sex, Rai stage and beta-2 microglobulin (beta-2M), whereas older age, abnormal cytogenetics and a higher beta-2M level were adverse predictors of overall survival. No increased risk of second cancer overall was observed; however, patients who received treatment for CLL had an elevated risk of developing acute myeloid leukemia (AML) and Hodgkin disease (HD). Two of the three patients who developed AML had been treated with alkylating agents. In conclusion, CLL patients had improved survival over time. The identification of clinical predictors of PF/overall survival has important clinical significance, and close surveillance for the development of second cancers is critical to improve the quality of life of long-term survivors.
Abstract:
Background. In the United States, the incidence of pancreatic cancer has increased; more than 37,000 new cases were diagnosed in 2007. The overall five-year survival rate is about 5%, and pancreatic cancer ranks as the fourth leading cause of cancer-related mortality among men and women. Despite progress in cancer diagnosis and treatment, pancreatic cancer remains an unresolved, significant public health problem in the United States. Familial pancreatic cancer has been confirmed to be responsible for approximately 10% of pancreatic cancer cases; the remaining 90% are without known inherited predisposition. Until now, the role of oral contraceptive pills (OCPs) and hormonal replacement therapy (HRT) in women with pancreatic cancer has remained unclear. We examined the association of exogenous hormone use in US women with the risk of pancreatic cancer. Methods. This was an active hospital-based case-control study conducted at the department of gastrointestinal medical oncology at The University of Texas M.D. Anderson Cancer Center. Between January 2005 and December 2007, a total of 287 women with pathologically confirmed pancreatic cancer (cases) and 287 healthy women (controls) were included in this investigation. Cases and controls were frequency-matched by age and race. Information about the use of hormonal contraceptives and HRT preparations, as well as about several risk factors for pancreatic cancer, was collected by personal interview. Univariate and multivariate analyses were performed to analyze the data. Results. We found a statistically significant protective effect of exogenous hormone preparations on pancreatic cancer development (adjusted odds ratio [AOR], 0.4; 95% confidence interval [CI], 0.2–0.8). In addition, a 40% reduction in pancreatic cancer risk was observed among women who ever used any contraceptive method, including oral contraceptive pills (AOR, 0.6; 95% CI, 0.4–0.9). Conclusions. Consistent with previous studies, the use of exogenous hormone preparations including oral contraceptive pills may confer a protective effect against pancreatic cancer development. More studies are warranted to explore the underlying mechanism of such protection.
Abstract:
Cerebrovascular accidents (CVA), or strokes, are now the third leading cause of death in the United States. Many who suffer strokes are admitted to rehabilitation centers to receive therapy to help rebuild and recover function. Nutrition plays a significant role in rehabilitation patient outcomes and is an essential part of comprehensive care. The purpose of this study was to determine whether nutrition and diet consistency are directly and independently associated with changes in Functional Independence Measure (FIM) scores in stroke patients in an acute rehabilitation unit. This study was a retrospective secondary analysis of medical chart records and included a total of 84 patients. Patients were divided into groups based on their admission diet: Regular, Dysphagia Advanced, Dysphagia Mechanically Altered, Dysphagia Pureed, and Nutrition Support. Measurements included admission and discharge Total, Motor, and Cognitive FIM scores; BMI, albumin, and prealbumin; and age, sex, and race. Patients showed a significant improvement in their FIM scores during their stay, with patients on Regular diets having the highest FIM scores. Patients who were more debilitated and had lower FIM scores were usually in one of the altered-texture diet groups or on nutrition support. Prealbumin and BMI were also highest in patients who had high FIM scores. Patients who were admitted on an altered diet also tended to advance in their diets, which reflects improvement in overall function. It is crucial to continue to improve nutrition administration in this population to help prevent morbidity and mortality. Proper nutrition in the acute phase of stroke can lay the essential groundwork for recovery.
Abstract:
Background. Colorectal cancer (CRC) is the third most commonly diagnosed cancer (excluding skin cancer) in both men and women in the United States, with an estimated 148,810 new cases and 49,960 deaths in 2008 (1). Racial/ethnic disparities have been reported across the CRC care continuum. Studies have documented racial/ethnic disparities in CRC screening (2-9), but only a few have examined these differences over time (9-11), and none has compared these trends between a population with CRC and a population without cancer. Additionally, although there is evidence suggesting that hospital factors (e.g. teaching hospital status and NCI designation) are associated with CRC survival (12-16), no studies have sought to explain racial/ethnic differences in survival by examining differences in socio-demographics, tumor characteristics, screening, co-morbidities, treatment, and hospital characteristics. Objectives and Methods. The overall goals of this dissertation were to describe the patterns and trends of racial/ethnic disparities in CRC screening (i.e. fecal occult blood test (FOBT), sigmoidoscopy (SIG) and colonoscopy (COL)) and to determine whether racial/ethnic disparities in CRC survival are explained by differences in socio-demographics, tumor characteristics, screening, co-morbidities, treatment, and hospital factors. These goals were accomplished in a two-paper format. In Paper 1, "Racial/Ethnic Disparities and Trends in Colorectal Cancer Screening in Medicare Beneficiaries with Colorectal Cancer and without Cancer in SEER Areas, 1992-2002", the study population consisted of 50,186 Medicare beneficiaries diagnosed with CRC from 1992 to 2002 and 62,917 Medicare beneficiaries without cancer during the same period. Both cohorts were aged 67 to 89 years and resided in 16 Surveillance, Epidemiology and End Results (SEER) regions of the United States. Screening procedures between 6 months and 3 years prior to the date of diagnosis for CRC patients, and prior to the index date for persons without cancer, were identified in Medicare claims. Crude and age-gender-adjusted percentages and odds ratios of receiving FOBT, SIG, or COL were calculated. Multivariable logistic regression was used to assess the effect of race/ethnicity on the odds of receiving CRC screening over time. Paper 2, "Racial/Ethnic Disparities in Colorectal Cancer Survival: To what extent are racial/ethnic disparities in survival explained by racial differences in socio-demographics, screening, co-morbidities, treatment, tumor or hospital characteristics", included a cohort of 50,186 Medicare beneficiaries diagnosed with CRC from 1992 to 2002 and residing in 16 SEER regions of the United States, identified in the SEER-Medicare linked database. Survival was estimated using the Kaplan-Meier method, and Cox proportional hazards modeling was used to estimate hazard ratios (HR) of mortality and 95% confidence intervals (95% CI). Results. The screening analysis demonstrated racial/ethnic disparities in screening over time among the cohort without cancer. From 1992 to 1995, Blacks and Hispanics were less likely than Whites to receive FOBT (OR=0.75, 95% CI: 0.65-0.87; OR=0.50, 95% CI: 0.34-0.72, respectively), but their odds of screening increased from 2000 to 2002 (OR=0.79, 95% CI: 0.72-0.85; OR=0.67, 95% CI: 0.54-0.75, respectively).
Blacks and Hispanics were also less likely than Whites to receive SIG from 1992 to 1995 (OR=0.75, 95% CI: 0.57-0.98; OR=0.29, 95% CI: 0.12-0.71, respectively), but their odds of screening increased from 2000 to 2002 (OR=0.79, 95% CI: 0.68-0.93; OR=0.50, 95% CI: 0.35-0.72, respectively). The survival analysis showed that Blacks had worse CRC-specific survival than Whites (HR: 1.33, 95% CI: 1.23-1.44); this was reduced for stage I-III disease after full adjustment for socio-demographics, tumor characteristics, screening, co-morbidities, treatment, and hospital characteristics (aHR=1.24, 95% CI: 1.14-1.35). Socioeconomic status, tumor characteristics, treatment, and co-morbidities contributed to the reduction in hazard ratios between Blacks and Whites with stage I-III disease. Asians had better survival than Whites both before (HR: 0.73, 95% CI: 0.64-0.82) and after (aHR: 0.80, 95% CI: 0.70-0.92) adjusting for all predictors for stage I-III disease. For stage IV disease, both Asians and Hispanics had better survival than Whites, which improved further after full adjustment (aHR=0.73, 95% CI: 0.63-0.84; aHR=0.74, 95% CI: 0.61-0.92, respectively). Conclusion. Screening disparities remain between Blacks and Whites, and between Hispanics and Whites, but have decreased in recent years. Future studies should explore other factors that may contribute to screening disparities, such as physician recommendations and language/cultural barriers, in this and younger populations. There were substantial racial/ethnic differences in CRC survival among older Whites, Blacks, Asians, and Hispanics. Co-morbidities, SES, tumor characteristics, treatment, and other predictor variables contributed to, but did not fully explain, the CRC survival differences between Blacks and Whites. Future research should examine the role of quality of care, particularly the benefit of treatment and post-treatment surveillance, in racial disparities in survival.
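A minimal sketch of the Kaplan-Meier and adjusted Cox steps described in Paper 2, using lifelines; the file name, race indicators, and covariates are assumptions, not the SEER-Medicare variable names.

```python
# Sketch only: Kaplan-Meier curves by race and an adjusted Cox model for
# CRC-specific mortality. File and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

crc = pd.read_csv("seer_medicare_crc.csv")

# Unadjusted survival curves by race/ethnicity
kmf = KaplanMeierFitter()
for race, grp in crc.groupby("race"):
    kmf.fit(grp["months"], event_observed=grp["crc_death"], label=race)
    kmf.plot_survival_function()

# Adjusted hazard ratios: race indicators plus assumed covariates
cols = ["months", "crc_death", "black", "asian", "hispanic",
        "age", "stage", "comorbidity_score", "ses_index", "screened"]
cph = CoxPHFitter()
cph.fit(crc[cols], duration_col="months", event_col="crc_death")
cph.print_summary()  # exp(coef) ~ adjusted HRs, e.g. Black vs. White
```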
Abstract:
The relationship between serum cholesterol and cancer incidence was investigated in the population of the Hypertension Detection and Follow-up Program (HDFP). The HDFP was a multi-center trial designed to test the effectiveness of a stepped program of medication in reducing mortality associated with hypertension. Over 10,000 participants, ages 30-69, were followed with clinic and home visits for a minimum of five years. Cancer incidence was ascertained from existing study documents, which included hospitalization records, autopsy reports and death certificates. During the five years of follow-up, 286 new cancer cases were documented. The distribution of sites and total number of cases were similar to those predicted using rates from the Third National Cancer Survey. A non-fasting baseline serum cholesterol level was available for most participants. Age-, sex-, and race-specific five-year cancer incidence rates were computed for each cholesterol quartile. Rates were also computed by smoking status, education status, and percent-ideal-weight quartiles. In addition, these and other factors were investigated using the multiple logistic model. For all cancers combined, a significant inverse relationship existed between baseline serum cholesterol levels and cancer incidence. Previously documented associations between smoking, education and cancer were also demonstrated but did not account for the relationship between serum cholesterol and cancer. The relationship was more evident in males than in females, but this was felt to reflect the different distribution of specific cancer sites in the two sexes. The inverse relationship existed for all specific sites investigated (except breast), although statistical significance was reached only for prostate carcinoma. Analyses excluding cases diagnosed during the first two years of follow-up still yielded an inverse relationship, and life table analysis indicated that competing risks during the period of follow-up did not account for it. It is concluded that a weak inverse relationship between serum cholesterol and cancer incidence does exist for many, but not all, cancer sites. This relationship is not due to confounding by other known cancer risk factors, competing risks, or persons entering the study with undiagnosed cancer. Not enough information is available at present to determine whether this relationship is causal, and further research is suggested.
Abstract:
Context: Black women are reported to have a higher prevalence of uterine fibroids, and a threefold higher incidence rate and relative risk for clinical uterine fibroid development, compared to women of other races. Uterine fibroid research has reported that black women experience greater uterine fibroid morbidity and a disproportionate uterine fibroid disease burden. With increased interest in understanding uterine fibroid development, and with race being a critical component of uterine fibroid assessment, it is imperative that the methods used to determine the race of research participants are defined, and that the operational definition of race as a variable is reported, both for methodological guidance and to enable the research community to compare statistical data and replicate studies. Objectives: To systematically review and evaluate the methods used to assess race and racial disparities in uterine fibroid research. Data Sources: Databases searched for this review included OVID Medline, NLM PubMed, Ebscohost Cumulative Index to Nursing and Allied Health Plus with Full Text, and Elsevier Scopus. Review Methods: Articles published in English were retrieved from the data sources between January 2011 and March 2011. Broad search terms, uterine fibroids and race, were employed to retrieve a comprehensive list of citations for review screening. The initial database yield included 947 articles; after removal of duplicates, 485 articles remained. In addition, 771 bibliographic citations were reviewed to identify articles not found through the primary database search, of which 17 new articles were included. In the first screening, 502 titles and abstracts were screened against eligibility questions to determine citations for exclusion and to retrieve full-text articles for review. In the second screening, 197 full-text articles were screened against eligibility questions to determine whether they met the full inclusion/exclusion criteria. Results: 100 articles met the inclusion criteria and were used in the results of this systematic review. The evidence suggested that black women have a higher prevalence of uterine fibroids than white women. None of the 14 studies reporting data on prevalence reported an operational definition or conceptual framework for the use of race. A limited number of studies reported on the prevalence of risk factors among racial subgroups: of these 3 studies, 2 reported a lower prevalence of risk factors for black women than for other races, contrary to hypothesis, and none reported a conceptual framework for the use of race. Conclusion: Of the 100 uterine fibroid studies included in this review, over half (66%) reported a specific objective to assess and recruit study participants based upon their race and/or ethnicity, but about half (51%) failed to report a method of determining the actual race of the participants, and far fewer, 4% (only four South American studies), reported a conceptual framework and/or operational definition of race as a variable. Nevertheless, most (95%) of the studies reported race-based health outcomes.
The inadequate methodological guidance on the use of race in uterine fibroid studies purporting to assess race and racial disparities may be a primary reason that uterine fibroid research continues to report racial disparities but fails to explain the high prevalence and increased exposures among African-American women. A standardized method of assessing race throughout uterine fibroid research would help elucidate what race is actually measuring, and the exposure risks associated with that measurement.
Abstract:
Left ventricular outflow tract (LVOT) defects are an important group of congenital heart defects (CHDs) because of their associated mortality and long-term complications. LVOT defects include aortic valve stenosis (AVS), coarctation of the aorta (CoA), and hypoplastic left heart syndrome (HLHS). Despite their clinical significance, their etiology is not completely understood. Even though the individual component phenotypes (AVS, CoA, and HLHS) may have different etiologies, they are often "lumped" together in epidemiological studies. Though lumping component phenotypes may improve the power to detect associations, it may also lead to ambiguous findings if these defects are etiologically distinct, owing to the potential for effect heterogeneity across component phenotypes. This study had two aims: (1) to identify associations between various risk factors and both the component (i.e., split) and composite (i.e., lumped) LVOT phenotypes, and (2) to assess the effect heterogeneity of risk factors across component phenotypes of LVOT defects. This study was a secondary data analysis. Primary data were obtained from the Texas Birth Defects Registry (TBDR), which uses an active surveillance method to ascertain birth defects in Texas. All cases of non-complex LVOT defects that met our inclusion criteria during the period 2002–2008 were included in the study. The comparison group included all unaffected live births for the same period (2002–2008). Data from vital statistics were used to evaluate associations. Statistical associations between selected risk factors and LVOT defects were determined by calculating crude and adjusted prevalence ratios using Poisson regression analysis, and effect heterogeneity was evaluated using polytomous logistic regression. There were a total of 2,353 cases of LVOT defects among 2,730,035 live births during the study period; after excluding "complex" cardiac cases and cases associated with syndromes (n=168), 1,311 definite cases of non-complex LVOT defects remained for analysis. Among infant characteristics, males were at significantly higher risk of LVOT defects than females. Among maternal characteristics, significant associations were seen with maternal age > 40 years (compared to maternal age 20–24 years) and maternal residence on the Texas-Mexico border (compared to non-border residence). Among birth characteristics, significant associations were seen with preterm birth and with being small for gestational age. When evaluating effect heterogeneity, the following variables had significantly different effects across the component LVOT defect phenotypes: infant sex, plurality, maternal age, maternal race/ethnicity, and Texas-Mexico border residence. This study found significant associations between various demographic factors and LVOT defects. While many findings from this study were consistent with results from previous studies, we also identified new factors associated with LVOT defects. Additionally, this study was the first to assess effect heterogeneity across LVOT defect component phenotypes. These findings contribute to a growing body of literature on characteristics associated with LVOT defects.
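One way to probe effect heterogeneity with a polytomous (multinomial) model is sketched below: fitting the component phenotype as a multi-level outcome and comparing covariate coefficients across the phenotype equations. This framing, the restriction to cases, and all variable names are assumptions for illustration, not the study's actual specification.

```python
# Sketch only: polytomous (multinomial) logistic regression with the LVOT
# component phenotype (AVS / CoA / HLHS) as the outcome. Differing
# coefficients across phenotype equations suggest effect heterogeneity.
# All file and variable names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

cases = pd.read_csv("tbdr_lvot_cases.csv")
cases["pheno_code"] = cases["phenotype"].astype("category").cat.codes

fit = smf.mnlogit(
    "pheno_code ~ male + plurality + C(maternal_age_cat)"
    " + C(race_ethnicity) + border_residence",
    data=cases,
).fit()
print(fit.summary())  # one coefficient set per non-reference phenotype;
# a Wald test across equations assesses whether an effect differs by phenotype
```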