984 results for Population Surveillance


Relevance: 30.00%

Abstract:

OBJECTIVE: To assess whether palliative primary tumor resection in colorectal cancer patients with incurable stage IV disease is associated with improved survival. BACKGROUND: There is a heated debate regarding whether or not an asymptomatic primary tumor should be removed in patients with incurable stage IV colorectal disease. METHODS: Stage IV colorectal cancer patients were identified in the Surveillance, Epidemiology, and End Results database between 1998 and 2009. Patients undergoing surgery to metastatic sites were excluded. Overall survival and cancer-specific survival were compared between patients with and without palliative primary tumor resection using risk-adjusted Cox proportional hazards regression models and stratified propensity score methods. RESULTS: Overall, 37,793 stage IV colorectal cancer patients were identified. Of those, 23,004 (60.9%) underwent palliative primary tumor resection. The rate of patients undergoing palliative primary cancer resection decreased from 68.4% in 1998 to 50.7% in 2009 (P < 0.001). In Cox regression analysis after propensity score matching, primary cancer resection was associated with significantly improved overall survival [hazard ratio (HR) of death = 0.40, 95% confidence interval (CI) = 0.39-0.42, P < 0.001] and cancer-specific survival (HR of death = 0.39, 95% CI = 0.38-0.40, P < 0.001). The benefit of palliative primary cancer resection persisted throughout 1998 to 2009, with HRs equal to or less than 0.47 for both overall and cancer-specific survival. CONCLUSIONS: On the basis of this population-based cohort of stage IV colorectal cancer patients, palliative primary tumor resection was associated with improved overall and cancer-specific survival. Therefore, the dogma that an asymptomatic primary tumor should never be resected in patients with unresectable colorectal cancer metastases must be questioned.
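The propensity-score-plus-Cox approach described above can be sketched as follows. This is a minimal illustration, not the authors' actual SEER analysis: the column names (`resection`, `survival_months`, `death`), the 1:1 nearest-neighbour matching with a 0.01 caliper, and the covariate handling are all assumptions; the paper itself reports stratified propensity score methods.

```python
# Sketch: propensity-score-matched Cox regression (hypothetical columns and caliper).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

def match_and_fit(df: pd.DataFrame, covariates: list[str]) -> CoxPHFitter:
    # 1. Estimate the propensity of primary tumor resection from baseline covariates
    #    (all covariates assumed numeric here).
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["resection"])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])

    # 2. 1:1 nearest-neighbour matching on the propensity score (caliper 0.01, assumed).
    treated = df[df["resection"] == 1]
    control = df[df["resection"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    dist, idx = nn.kneighbors(treated[["ps"]])
    keep = dist.ravel() <= 0.01
    matched = pd.concat([treated[keep], control.iloc[idx.ravel()[keep]]]).reset_index(drop=True)

    # 3. Cox proportional hazards model for overall survival in the matched sample.
    cph = CoxPHFitter()
    cph.fit(matched[["survival_months", "death", "resection"] + covariates],
            duration_col="survival_months", event_col="death")
    return cph   # cph.hazard_ratios_["resection"] gives the HR for resection
```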

Relevance: 30.00%

Abstract:

After an outbreak of Yersinia enterocolitica at a NHP research facility, we performed a multispecies investigation of the prevalence of Yersinia spp. in various mammals that resided or foraged on the grounds of the facility, to better understand the epizootiology of yersiniosis. Blood samples and fecal and rectal swabs were obtained from 105 captive African green monkeys (AGM), 12 feral cats, 2 dogs, 20 mice, 12 rats, and 3 mongooses. Total DNA extracted from swab suspensions served as template for the detection of Y. enterocolitica DNA by real-time PCR. Neither Y. enterocolitica organisms nor their DNA were detected in any of these samples. However, Western blotting revealed the presence of Yersinia antibodies in plasma. The AGM samples revealed a seroprevalence of 91% for Yersinia spp. and of 61% for Y. enterocolitica specifically. The AGM that were housed in cages where at least one fatality occurred during the outbreak (clinical group) had seroprevalence similar to that of AGM housed in unaffected cages (nonclinical group). However, the nonclinical group was older than the clinical group. In addition, 25%, 100%, 33%, 10%, and 10% of the sampled local cats, dogs, mongooses, rats, and mice, respectively, were seropositive. The high seroprevalence after this outbreak suggests that Y. enterocolitica was transmitted effectively through the captive AGM population and that age was an important risk factor for disease. Knowledge regarding local environmental sources of Y. enterocolitica and the possible role of wildlife in the maintenance of yersiniosis is necessary to prevent and manage this disease.
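For readers wanting to reproduce the style of estimate reported above, a seroprevalence with a binomial confidence interval can be computed as in the sketch below. The positive count is back-calculated from the reported 91% of 105 AGM and is therefore an assumption.

```python
# Sketch: seroprevalence point estimate with a Wilson 95% CI (assumed counts).
from statsmodels.stats.proportion import proportion_confint

n_sampled = 105                          # captive African green monkeys sampled
n_positive = round(0.91 * n_sampled)     # ~96 seropositive animals (assumed from 91%)

prevalence = n_positive / n_sampled
low, high = proportion_confint(n_positive, n_sampled, alpha=0.05, method="wilson")
print(f"Seroprevalence: {prevalence:.1%} (95% CI {low:.1%}-{high:.1%})")
```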

Relevance: 30.00%

Abstract:

We obtained partial carcass condemnation (PCC) data for cattle (2009-2010) from a Swiss slaughterhouse. Data on whole carcass condemnations (WCC) carried out at the same slaughterhouse over those years were extracted from the national database for meat inspection. Given the differences observed between the WCC and PCC time series, we found it likely that the two indicators respond to different health events in the population and that one cannot be substituted for the other. Because PCC records are promising for syndromic surveillance, the meat inspection database should be capable of recording both WCC and PCC data in the future. However, a standardised list of reasons for PCC needs to be defined and used nationwide in all slaughterhouses.

Relevance: 30.00%

Abstract:

We used meat-inspection data collected over a period of three years in Switzerland to evaluate slaughterhouse-level, farm-level and animal-level factors that may be associated with whole carcass condemnation (WCC) in cattle after slaughter. The objective of this study was to identify WCC risk factors so they can be communicated to, and managed by, the slaughter industry and veterinary services. During meat inspection, there were three main predictors of the risk of WCC: the slaughtered animal's sex, its age, and the size of the slaughterhouse in which it was processed. WCC for injuries and significant weight loss (visible welfare indicators) occurred almost exclusively in smaller slaughterhouses. Cattle exhibiting clinical syndromes that were not externally visible (e.g. pneumonia lesions) and that are associated with cattle fattening tended to end up in larger slaughterhouses. For this reason, it is important for animal health surveillance to collect data from both types of slaughterhouse. Other important risk factors for WCC were the on-farm mortality rate and the number of cattle on the farm of origin. This study highlights that the risk factors for WCC are as complex as the production system itself, with risk factors interacting with one another in ways that are sometimes difficult to interpret biologically. Risk-based surveillance aimed at farms with recurring health problems (e.g. a history of above-average condemnation rates) may be more appropriate than the selection of higher-risk animals arriving at slaughter. In Switzerland, the introduction of a benchmarking system that gives farmers feedback on condemnation reasons and on their performance relative to the national/regional average could be a first step towards improving herd management and financial returns for producers.
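A minimal sketch of the kind of interacting risk-factor model alluded to above is given below, assuming an animal-level logistic regression; the variable names and the single-level specification are illustrative assumptions rather than the study's actual model.

```python
# Sketch: carcass-level logistic regression for WCC with an interaction term
# (sex x slaughterhouse size). All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_wcc_model(carcasses: pd.DataFrame):
    """`carcasses`: one row per slaughtered animal with a binary `condemned` outcome."""
    model = smf.logit(
        "condemned ~ sex + age_years + C(slaughterhouse_size)"
        " + sex:C(slaughterhouse_size)"          # sex effect allowed to vary by abattoir size
        " + on_farm_mortality_rate + herd_size",
        data=carcasses,
    )
    result = model.fit(disp=False)
    odds_ratios = np.exp(result.params)          # coefficients on the odds-ratio scale
    ci = np.exp(result.conf_int())               # 95% CIs on the odds-ratio scale
    return result, odds_ratios, ci
```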

Relevance: 30.00%

Abstract:

Syndromic surveillance (SyS) systems currently exploit various sources of health-related data, most of which are collected for purposes other than surveillance (e.g. economic). Several European SyS systems use data collected during meat inspection for syndromic surveillance of animal health, as some diseases may be more easily detected post-mortem than at their point of origin or during the ante-mortem inspection upon arrival at the slaughterhouse. In this paper we use simulation to evaluate the performance of a quasi-Poisson regression (also known as improved Farrington) algorithm for the detection of disease outbreaks during post-mortem inspection of slaughtered animals. When the algorithm was parameterized based on a retrospective analysis of six years of historical data, the probability of detection was satisfactory for large (range 83-445 cases) outbreaks but poor for small (range 20-177 cases) outbreaks. Varying the amount of historical data used to fit the algorithm can help increase the probability of detection for small outbreaks. However, while the use of a 0.975 quantile generated a low false-positive rate, in most cases more than 50% of outbreak cases had already occurred at the time of detection. The high variance observed in the whole-carcass-condemnation time series, and the lack of flexibility in the temporal distribution of simulated outbreaks resulting from the low (monthly) reporting frequency, constitute major challenges for early detection of outbreaks in the livestock population based on meat inspection data. Reporting frequency should be increased in the future to improve the timeliness of the SyS system, while increased sensitivity may be achieved by integrating meat inspection data into a multivariate system that simultaneously evaluates multiple sources of data on livestock health.
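A simplified sketch of the detection step is shown below. It fits an overdispersed (quasi-Poisson) GLM with a linear trend to historical baseline counts and raises an alarm when the current count exceeds an upper bound at the 0.975 quantile mentioned above. The normal approximation for the bound and the data layout are assumptions; the published improved Farrington algorithm additionally reweights past outliers and uses a power transformation, both omitted here.

```python
# Sketch: quasi-Poisson (Farrington-style) alarm for one new observation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def farrington_like_alarm(baseline_counts: pd.Series, current_count: float,
                          quantile: float = 0.975) -> bool:
    t = np.arange(len(baseline_counts))
    X = sm.add_constant(t)
    # Quasi-Poisson: Poisson GLM with dispersion estimated from Pearson chi-square.
    fit = sm.GLM(baseline_counts.values, X, family=sm.families.Poisson()).fit(scale="X2")
    # Expected count at the next time point.
    x_now = np.array([[1.0, len(baseline_counts)]])
    mu = fit.predict(x_now)[0]
    # Approximate upper prediction bound allowing for overdispersion.
    upper = mu + norm.ppf(quantile) * np.sqrt(fit.scale * mu)
    return current_count > upper
```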

Relevance: 30.00%

Abstract:

Public health and medicine are complementary disciplines dedicated to the health and well-being of humankind. Worldwide, medical school accreditation bodies require the inclusion of population health in medical education. In 2003, the Institute of Medicine (IOM) recommended that all medical students receive basic public health training in population-based prevention. The purpose of this study was to (1) examine the public health clinical performance of third-year medical students at two independent medical schools, (2) compare the public health clinical practice performance of the schools, and (3) identify underlying predictors of high and low public health clinical performance at one of the medical schools.

This study is unique in its analysis and report of observed medical student public health clinical practices. The cohort consisted of 751 third-year medical students who completed a required clinical performance exam using trained standardized patients. Medical student performance scores on 24 consensus public health items derived from nine patient cases were analyzed.

The analysis showed nearly identical results for both medical schools at the 60%, 65%, and 70% pass rates. Students performed poorly on items associated with prevention, behavioral science, and surveillance. Factors associated with high student performance included being from an underrepresented minority, matching to a primary care residency, and high class ranking. A review of the medical school curriculum at both schools revealed a lack of training in four public health domains. Nationally, 32% of medical students reported inadequate training in public health in 2006.

These findings suggest that more dedicated teaching time for public health domains is needed at the medical schools represented in this study. Finally, more research is needed to assess attainment of public health knowledge and skills by medical students nationwide if we are to meet the recommendations of the IOM.

Relevance: 30.00%

Abstract:

Objective. The objective of this study was to determine the prevalence of MRSA colonization in adult patients admitted to intensive care units at an urban tertiary care hospital in Houston, Texas, and to evaluate the risk factors associated with colonization during a three-month active-screening pilot project.

Design. This study used secondary data from a small cross-sectional pilot project.

Methods. All patients admitted to the seven specialty ICUs were screened for MRSA by nasal culture. Results were obtained using the BD GeneOhm™ IDI-MRSA in vitro diagnostic assay for rapid MRSA detection. Statistical analysis was performed using Stata 10, Epi Info, and JavaStat.

Results. 1283/1531 (83.4%) adult ICU admissions were screened for nasal MRSA colonization. Of those screened, demographic and risk factor data were available for 1260/1283 (98.2%). Unresolved results were obtained for 73 patients. Therefore, a total of 1187/1531 (77.5%) of all ICU admissions during the three-month study period are described in this analysis. Risk factors associated with colonization included the following: hospitalization within the last six months (odds ratio 2.48 [95% CI, 1.70-3.63], p<0.001), hospitalization within the last 12 months (odds ratio 2.27 [95% CI, 1.57-3.80], p<0.001), and having diabetes mellitus (odds ratio 1.63 [95% CI, 1.14-2.32], p=0.007).

Conclusion. Based on the literature, the prevalence of MRSA in this population is typical of other prevalence studies conducted in the United States and coincides with the continuing upward trend in MRSA colonization. Significant risk factors were similar to those found in previous studies. Overall, the active surveillance screening pilot project has provided valuable information on a population not widely addressed. These findings can aid future interventions for the education, control, prevention, and treatment of MRSA.
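The odds ratios above come from standard 2x2 contingency-table or regression analyses; a minimal sketch of the contingency-table version is shown below, with made-up cell counts (the study's raw counts are not reported here).

```python
# Sketch: odds ratio with 95% CI from a 2x2 table (hypothetical counts).
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

#                  colonized   not colonized
table = np.array([[   60,          340],      # hospitalized in last six months (hypothetical)
                  [   55,          805]])     # not hospitalized (hypothetical)

t22 = Table2x2(table)
or_est = t22.oddsratio
ci_low, ci_high = t22.oddsratio_confint(alpha=0.05)
print(f"OR = {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```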

Relevance: 30.00%

Abstract:

This study assesses adolescents' health issues in Comal County, TX. Adolescents are defined as youth aged 12 to 17 years who resided in Comal County during the period 2000 to 2007. The analysis focused on high-risk behaviors, including use of the gateway drugs tobacco and alcohol; illegal substance use; and reproductive-health indicators, including sexual activity, sexually transmitted diseases, and pregnancy. This study is based on the primary and secondary data collected as part of the 2008 Comal County Community Assessment. It compares findings from the primary data sources to extant data from four secondary data sources: (1) The Centers for Disease Control & Prevention (national) Healthy People 2010; (2) The Centers for Disease Control & Prevention Youth Risk Behavior Surveillance Survey, 2007; (3) The Texas Department of State Health Services, 2000 to 2007; and (4) The Pride Survey (local and statewide). The methods are drawn from the literature on "rapid epidemiologic appraisal" (Annett H. & Rifkin S. B., 1988). The study focuses on corroborating the perceptions, subjective concerns, opinions and beliefs of Comal County key stakeholders and community participants with qualitative and quantitative indicators of health and well-being. The value of this approach is to inform community leaders, using a public health perspective and evidence, in their decisions about priority setting and resource allocation for prevention of high-risk behaviors and promotion of adolescent health and well-being.

Relevance: 30.00%

Abstract:

Public health surveillance programs for vaccine-preventable diseases (VPD) need functional quality assurance (QA) in order to operate with high-quality activities that prevent communicable diseases from spreading in the community. Having a functional QA plan can assure the performance and quality of a program without putting excessive stress on its resources. A functional QA plan acts as a check on the quality of the day-to-day activities performed by the VPD surveillance program while also providing data useful for evaluating the program. This study developed a QA plan that involves collection, collation, analysis and reporting of information based on standardized (predetermined) formats and indicators as an integral part of routine work for the vaccine-preventable disease surveillance program at the City of Houston Department of Health and Human Services. The QA plan also provides sampling and analysis plans for assessing various QA indicators, as well as recommendations to the Houston Department of Health and Human Services for implementing the QA plan. The QA plan developed for VPD surveillance in the City of Houston is intended to be a low-cost system that could serve as a template for QA plans in other public health programs, not only in the city or the nation but anywhere across the globe. Having a QA plan for VPD surveillance in the City of Houston would also serve funding agencies such as the CDC by assuring that resources are being expended efficiently while achieving the real goal of positively impacting the health and lives of the recipient/target population.

Relevance: 30.00%

Abstract:

Background. Colorectal cancer (CRC) is the third most commonly diagnosed cancer (excluding skin cancer) in both men and women in the United States, with an estimated 148,810 new cases and 49,960 deaths in 2008 (1). Racial/ethnic disparities have been reported across the CRC care continuum. Studies have documented racial/ethnic disparities in CRC screening (2-9), but only a few studies have looked at these differences in CRC screening over time (9-11). No studies have compared these trends between a population with CRC and a population without cancer. Additionally, although there is evidence suggesting that hospital factors (e.g. teaching hospital status and NCI designation) are associated with CRC survival (12-16), no studies have sought to explain the racial/ethnic differences in survival by looking at differences in socio-demographics, tumor characteristics, screening, co-morbidities, treatment, and hospital characteristics.

Objectives and Methods. The overall goals of this dissertation were to describe the patterns and trends of racial/ethnic disparities in CRC screening (i.e. fecal occult blood test (FOBT), sigmoidoscopy (SIG) and colonoscopy (COL)) and to determine whether racial/ethnic disparities in CRC survival are explained by differences in socio-demographic factors, tumor characteristics, screening, co-morbidities, treatment, and hospital factors. These goals were accomplished in a two-paper format.

In Paper 1, "Racial/Ethnic Disparities and Trends in Colorectal Cancer Screening in Medicare Beneficiaries with Colorectal Cancer and without Cancer in SEER Areas, 1992-2002", the study population consisted of 50,186 Medicare beneficiaries diagnosed with CRC from 1992 to 2002 and 62,917 Medicare beneficiaries without cancer during the same time period. Both cohorts were aged 67 to 89 years and resided in 16 Surveillance, Epidemiology and End Results (SEER) regions of the United States. Screening procedures performed between 6 months and 3 years prior to the date of diagnosis for CRC patients, and prior to the index date for persons without cancer, were identified in Medicare claims. The crude and age-gender-adjusted percentages and odds ratios of receiving FOBT, SIG, or COL were calculated. Multivariable logistic regression was used to assess the effect of race/ethnicity on the odds of receiving CRC screening over time.

Paper 2, "Racial/Ethnic Disparities in Colorectal Cancer Survival: To what extent are racial/ethnic disparities in survival explained by racial differences in socio-demographics, screening, co-morbidities, treatment, tumor or hospital characteristics", included a cohort of 50,186 Medicare beneficiaries diagnosed with CRC from 1992 to 2002 and residing in 16 SEER regions of the United States, identified in the SEER-Medicare linked database. Survival was estimated using the Kaplan-Meier method. Cox proportional hazards modeling was used to estimate hazard ratios (HR) of mortality and 95% confidence intervals (95% CI).

Results. The screening analysis demonstrated racial/ethnic disparities in screening over time among the cohort without cancer. From 1992 to 1995, Blacks and Hispanics were less likely than Whites to receive FOBT (OR=0.75, 95% CI: 0.65-0.87; OR=0.50, 95% CI: 0.34-0.72, respectively), but their odds of screening increased from 2000 to 2002 (OR=0.79, 95% CI: 0.72-0.85; OR=0.67, 95% CI: 0.54-0.75, respectively). Blacks and Hispanics were less likely than Whites to receive SIG from 1992 to 1995 (OR=0.75, 95% CI: 0.57-0.98; OR=0.29, 95% CI: 0.12-0.71, respectively), but their odds of screening increased from 2000 to 2002 (OR=0.79, 95% CI: 0.68-0.93; OR=0.50, 95% CI: 0.35-0.72, respectively).

The survival analysis showed that Blacks had worse CRC-specific survival than Whites (HR: 1.33, 95% CI: 1.23-1.44), but this was reduced for stage I-III disease after full adjustment for socio-demographic factors, tumor characteristics, screening, co-morbidities, treatment and hospital characteristics (aHR=1.24, 95% CI: 1.14-1.35). Socioeconomic status, tumor characteristics, treatment and co-morbidities contributed to the reduction in hazard ratios between Blacks and Whites with stage I-III disease. Asians had better survival than Whites before (HR: 0.73, 95% CI: 0.64-0.82) and after (aHR: 0.80, 95% CI: 0.70-0.92) adjusting for all predictors for stage I-III disease. For stage IV disease, both Asians and Hispanics had better survival than Whites, and the survival advantage persisted after full adjustment (aHR=0.73, 95% CI: 0.63-0.84; aHR=0.74, 95% CI: 0.61-0.92, respectively).

Conclusion. Screening disparities remain between Blacks and Whites, and between Hispanics and Whites, but have decreased in recent years. Future studies should explore other factors that may contribute to screening disparities, such as physician recommendations and language/cultural barriers, in this and younger populations. There were substantial racial/ethnic differences in CRC survival among older Whites, Blacks, Asians and Hispanics. Co-morbidities, SES, tumor characteristics, treatment and other predictor variables contributed to, but did not fully explain, the CRC survival differences between Blacks and Whites. Future research should examine the role of quality of care, particularly the benefit of treatment and post-treatment surveillance, in racial disparities in survival.
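The sequentially adjusted survival models described above can be sketched roughly as below. Column names, the covariate blocks, and the use of 0/1 race indicators (Whites as the reference) are assumptions for illustration, not the dissertation's exact specification.

```python
# Sketch: Cox models with increasing adjustment (all covariates assumed numeric/0-1 coded).
import pandas as pd
from lifelines import CoxPHFitter

def sequential_cox(df: pd.DataFrame) -> dict[str, CoxPHFitter]:
    race = ["black", "hispanic", "asian"]             # 0/1 indicators, Whites = reference
    blocks = {
        "crude": race,
        "plus_sociodemographic": race + ["age", "male", "ses_quintile"],
        "fully_adjusted": race + ["age", "male", "ses_quintile", "stage", "grade",
                                  "screened", "comorbidity_score", "treated_surgery",
                                  "treated_chemo", "teaching_hospital"],
    }
    models = {}
    for name, covariates in blocks.items():
        cph = CoxPHFitter()
        cph.fit(df[["months", "crc_death"] + covariates],
                duration_col="months", event_col="crc_death")
        models[name] = cph                             # cph.hazard_ratios_ gives the HRs
    return models
```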

Relevance: 30.00%

Abstract:

Background. A few studies have reported gender differences along the colorectal cancer (CRC) continuum, but none has done so longitudinally to compare cancer and non-cancer populations.

Objectives and Methods. To examine gender differences in colorectal cancer screening (CRCS); to examine trends in gender differences in CRC screening among two groups of patients (Medicare beneficiaries with and without cancer); to examine gender differences in CRC incidence; and to examine whether any of these differences changed over time. In Paper 1, the study population consisted of men and women, ages 67–89 years, with CRC (73,666) or without any cancer (39,006), residing in 12 U.S. Surveillance, Epidemiology and End Results (SEER) regions. Crude and age-adjusted percentages and odds ratios of receiving fecal occult blood test (FOBT), sigmoidoscopy (SIG), or colonoscopy (COL) were calculated. Multivariable logistic regression was used to assess the effect of gender on the odds of receiving CRC screening over time.

In Paper 2, age-adjusted incidence rates and proportions over time were reported across race, CRC subsite, CRC stage and SEER region for 373,956 patients, ages 40+ years, residing in 9 SEER regions and diagnosed with malignant CRC.

Results. Overall, women had higher CRC screening rates than men, and screening rates in general were higher in the SEER sample of persons with a CRC diagnosis. Significant temporal divergence in FOBT screening was observed between men and women in both cohorts. Although the largest temporal increases in screening rates were found for COL, especially among the cohort with CRC, little change in the gender gap was observed over time. Receipt of FOBT was significantly associated with female gender, especially in the period of full Medicare coverage. Receipt of COL was significantly associated with male gender, especially in the period of limited Medicare coverage.

Overall, approximately equal numbers of men (187,973) and women (185,983) were diagnosed with malignant CRC. Men had significantly higher age-adjusted CRC incidence rates than women across all categories of age, race, subsite, stage and SEER region, even though rates declined in all categories over time. Significant moderate increases in the rate difference occurred among 40-59 year olds; significant reductions occurred among patients aged 70+, within the rectal subsite, for unstaged and distant-stage CRC, and in the eastern and western SEER regions.

Conclusions. Persistent gender differences in CRC incidence across time may have implications for gender-based interventions that take age into consideration. A shift toward proximal cancer was observed over time for both genders, but the high proportion of men who develop rectal cancer suggests that a greater proportion of men may need to be targeted with newer screening methods such as fecal DNA or COL. Although previous reports have documented higher CRC screening among men, the higher incidence of CRC observed among men suggests that higher-risk categories of men are probably not being reached. FOBT utilization rates among women have increased over time and the gender gap widened between 1998 and 2005. COL utilization is associated with male gender, but the differences over time are small.
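The age-adjusted incidence rates reported above rely on direct standardization; a minimal sketch with hypothetical age bands, counts, and standard-population weights is shown below.

```python
# Sketch: direct age standardization of an incidence rate (hypothetical inputs).
import pandas as pd

def age_adjusted_rate(cases: pd.Series, person_years: pd.Series,
                      std_population: pd.Series) -> float:
    """All three series are indexed by the same age bands."""
    age_specific = cases / person_years                      # rate in each age band
    weights = std_population / std_population.sum()          # standard-population weights
    return float((age_specific * weights).sum() * 100_000)   # per 100,000

# Hypothetical example: CRC incidence in men for three broad age bands.
ages = ["40-59", "60-69", "70+"]
rate = age_adjusted_rate(
    cases=pd.Series([150, 420, 690], index=ages),
    person_years=pd.Series([900_000, 450_000, 350_000], index=ages),
    std_population=pd.Series([78_000, 35_000, 31_000], index=ages),  # assumed standard weights
)
print(f"Age-adjusted incidence: {rate:.1f} per 100,000")
```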

Relevance: 30.00%

Abstract:

Background. Beginning September 2, 2005, San Antonio area shelters received approximately 12,700 evacuees from Hurricane Katrina. Two weeks later, another 12,000 evacuees from Hurricane Rita arrived. By mid-October 2005, the in-shelter population was 1,000 people. There was concern regarding the potential for spread of infectious diseases in the shelter. The San Antonio Metropolitan Health District (SAMHD) established a syndromic surveillance system with Comprehensive Health Services (CHS), which provided on-site health care. CHS was in daily contact with SAMHD to report symptoms of concern until the shelter closed on December 23, 2005.

Study type. The objective of this study was to assess the methods used and describe the practical considerations involved in establishing and managing a syndromic surveillance system, as established by the SAMHD in the long-term shelter clinic maintained by CHS for the hurricane evacuees.

Methods. Information and descriptive data used in this study were collected from multiple sources, primarily the San Antonio Metropolitan Health District's 2006 Report on Syndromic Surveillance of a Long-Term Shelter by Hausler & Rohr-Allegrini. SAMHD and CHS staff ensured that each clinic visit was recorded by date, demographic information, chief complaint and medical disposition. Logs were obtained daily and subsequently entered into a Microsoft Access database and analyzed in Excel.

Results. During a nine-week period, 4,913 clinic visits were recorded, reviewed and later analyzed. Repeat visits comprised 93.0% of encounters. Chronic illnesses contributed to 21.7% of the visits. Approximately 54.0% were acute care encounters. Of all encounters, 17.3% had infectious disease potential, primarily gastrointestinal and respiratory syndromes. Evacuees accounted for 86% and staff for 14% of all visits to the shelter clinic. There were 782 unduplicated individuals who sought services at the clinic, comprising 63% (496) evacuees and 36% (278) staff members. Staff were more likely to frequent the clinic but made fewer visits each.

Conclusion. The presence of health care services and syndromic surveillance provided the opportunity to recognize, document and intervene in any disease outbreak at this long-term shelter. Constant vigilance allowed SAMHD to inform and reassure concerned people living and working in the shelter as well as those living outside it.

Relevance: 30.00%

Abstract:

Invasive pneumococcal disease (IPD) causes a significant health burden in the US, is responsible for the majority of bacterial meningitis, and causes more deaths than any other vaccine-preventable bacterial disease in the US. The estimated national IPD rate is 14.3 cases per 100,000 population, with a case-fatality rate of 1.5 cases per 100,000 population. Although cases of IPD are routinely reported to the local health department in Harris County, Texas, the incidence (IR) and case-fatality (CFR) rates have not been reported. Additionally, it is important to know which serotypes of S. pneumoniae are circulating in Harris County, Texas, and to determine whether ‘replacement disease’ is occurring.

This study reported incidence and case-fatality rates from 2003 to 2009 and described the trends in IPD, including the IPD serotypes circulating in Harris County, Texas, during the study period, particularly in 2008 and 2010. Annual incidence rates were calculated and reported for 2003 to 2009 using complete surveillance-year data.

Geographic information system (GIS) software was used to create a series of maps of the data reported during the study period. Cluster and outlier analysis and hot spot analysis were conducted using both case counts by census tract and disease rates by census tract.

IPD age- and race-adjusted IRs for Harris County, Texas, and their 95% confidence intervals (CIs) were 1.40 (95% CI 1.0, 1.8), 1.71 (95% CI 1.24, 2.17), 3.13 (95% CI 2.48, 3.78), 3.08 (95% CI 2.43, 3.74), 5.61 (95% CI 4.79, 6.43), 8.11 (95% CI 7.11, 9.1), and 7.65 (95% CI 6.69, 8.61) for the years 2003 to 2009, respectively (rates were age- and race-adjusted to each year's midyear US population estimates). A Poisson regression model demonstrated a statistically significant increasing trend of about 32 percent per year in the IPD rates over the course of the study period. IPD age- and race-adjusted case-fatality rates (CFR) for Harris County, Texas, were also calculated and reported. A Poisson regression model demonstrated a statistically significant increasing trend of about 26 percent per year in the IPD case-fatality rates from 2003 through 2009. A logistic regression model associated the risk of dying from IPD with alcohol abuse (OR 4.69, 95% CI 2.57, 8.56) and with meningitis (OR 2.42, 95% CI 1.46, 4.03).

The prevalence of non-vaccine serotypes (NVT) among IPD cases with serotyped isolates was 98.2 percent. In 2008, the year with the sample most geographically representative of all areas of Harris County, Texas, the prevalence was 96 percent. Given these findings, it is reasonable to conclude that ‘replacement disease’ is occurring in Harris County, Texas, meaning that the majority of IPD is caused by serotypes not included in the PCV7 vaccine. In conclusion, IPD rates increased during the study period in Harris County, Texas.
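The Poisson trend estimate quoted above (about 32 percent per year) can be obtained from a model of the following form. The annual case counts and population denominators in the sketch are hypothetical; only the modelling approach follows the text.

```python
# Sketch: Poisson regression for an annual trend in a disease rate (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

years = np.arange(2003, 2010)
data = pd.DataFrame({
    "year": years - 2003,                                    # years since baseline
    "cases": [52, 65, 120, 121, 224, 330, 316],              # hypothetical annual counts
    "population": np.linspace(3.6e6, 4.1e6, len(years)),     # hypothetical county population
})

X = sm.add_constant(data[["year"]])
fit = sm.GLM(data["cases"], X, family=sm.families.Poisson(),
             offset=np.log(data["population"])).fit()        # log population as offset
pct_per_year = (np.exp(fit.params["year"]) - 1) * 100        # exp(beta) - 1 = % change per year
print(f"Estimated trend: {pct_per_year:.0f}% change in the rate per year")
```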

Relevance: 30.00%

Abstract:

To reach the goals established by the Institute of Medicine (IOM) and the Centers for Disease Control and Prevention's (CDC) STOP TB USA, measures must be taken to curtail a future peak in tuberculosis (TB) incidence and speed the currently stagnant rate of TB elimination. Both efforts will require, at minimum, the consideration and understanding of the third dimension of TB transmission: the location-based spread of an airborne pathogen among persons known and unknown to each other. This consideration will require an elucidation of the areas within the U.S. that have endemic TB. The Houston Tuberculosis Initiative (HTI) was a population-based active surveillance of confirmed Houston/Harris County TB cases from 1995–2004. Strengths of this dataset include the molecular characterization of laboratory-confirmed cases, the collection of geographic locations (including home addresses) frequented by cases, and the HTI time period that parallels a decline in TB incidence in the United States (U.S.). The HTI dataset was used in this secondary data analysis to implement a GIS analysis of TB cases, the locations frequented by cases, and their association with risk factors for TB transmission.

This study reports, for the first time, the incidence of TB among the homeless in Houston, Texas. The homeless are an at-risk population for TB disease, yet they are also a population whose TB incidence has been unknown and unreported due to their non-enumeration. The first section of this dissertation identifies local areas in Houston with endemic TB disease. Many Houston TB cases who reported living in these endemic areas also share the TB risk factor of current or recent homelessness. Merging the 2004–2005 Houston enumeration of the homeless with historical HTI surveillance data of TB cases in Houston enabled this first-time report of TB risk among the homeless in Houston. The homeless were more likely to be US-born, to belong to a genotypic cluster, and to belong to a cluster of a larger size. The calculated average incidence among homeless persons was 411/100,000, compared to 9.5/100,000 among housed persons. These alarming rates are not driven by a co-infection but by social determinants. Unsheltered persons were hospitalized for more days and required more follow-up time from staff than those who reported a steady housing situation. The homeless are a specific example of the increased targeting of prevention dollars that could occur if TB rates were reported for specific areas with known health disparities rather than as a generalized rate normalized over a diverse population.

It has been estimated that 27% of Houstonians use public transportation. The city layout allows bus routes to run like veins connecting even the most diverse of populations within the metropolitan area. In a secondary data analysis, frequent bus use (defined as riding a route weekly) among TB cases was assessed for its relationship with known TB risk factors. The spatial distribution of genotypic clusters associated with bus use was assessed, along with the reported routes and epidemiologic links among cases belonging to the identified clusters.

TB cases who reported frequent bus use were more likely to have demographic and social risk factors associated with poverty, immune suppression and health disparities. An equal proportion of bus riders and non-bus riders were cultured for Mycobacterium tuberculosis, yet 75% of bus riders were genotypically clustered, indicating recent transmission, compared to 56% of non-bus riders (OR=2.4, 95% CI 2.0-2.8, p<0.001). Bus riders had a mean cluster size of 50.14 vs. 28.9 (p<0.001). Second-order spatial analysis of clustered fingerprint 2 (n=122), a Beijing family cluster, revealed geographic clustering among cases based on their report of bus use. Univariate and multivariate analyses of routes reported by cases belonging to these clusters found that 10 of the 14 clusters were associated with bus use. Individual Metro routes, including one route servicing the local hospitals, were found to be risk factors for belonging to a cluster shown to be endemic in Houston. The routes themselves geographically connect the census tracts previously identified as having endemic TB. 78% (15/23) of the Houston Metro routes investigated had one or more print groups reporting frequent use in every HTI study year. We present data on three specific but clonally related print groups and show that bus use is clustered in time by route and is the only known link between cases in one of the three prints: print 22. (Abstract shortened by UMI.)

Relevance: 30.00%

Abstract:

Birth defects are the leading cause of infant mortality in the United States and are a major cause of lifetime disability. However, efforts to understand their causes have been hampered by a lack of population-specific data. During 1990–2004, 22 state legislatures responded to this need by proposing birth defects surveillance legislation (BDSL). The contrast between these states and those that did not pass BDSL provides an opportunity to better understand the conditions associated with US public health policy diffusion.

This study identifies key state-specific determinants that predict (1) the introduction of BDSL onto states' formal legislative agendas and (2) the successful adoption of these laws. Secondary aims were to interpret these findings in a theoretically sound framework and to incorporate evidence from three analytical approaches.

The study begins with a comparative case study of Texas and Oregon (states with divergent BDSL outcomes), including a review of historical documentation and content analysis of key informant interviews. After selecting and operationalizing explanatory variables suggested by the case study, Qualitative Comparative Analysis (QCA) was applied to publicly available data to describe important patterns of variation among 37 states. Results from logistic regression were compared to determine whether the two methods produced consistent findings.

Themes emerging from the comparative case study included differing budgetary conditions and the significance of relationships within policy issue networks. However, the QCA and statistical analysis pointed to the importance of political parties and contrasting societal contexts. Notably, state policies that allow greater access to citizen-driven ballot initiatives were consistently associated with a lower likelihood of introducing BDSL.

Methodologically, these results indicate that a case study approach, while important for eliciting valuable context-specific detail, may fail to detect the influence of overarching, systemic variables such as party competition. However, the QCA and statistical analyses were limited by a lack of existing data to operationalize policy issue networks, and thus may have downplayed the impact of personal interactions.

This study contributes to the field of health policy studies in three ways. First, it emphasizes the importance of collegial and consistent relationships among policy issue network members. Second, it calls attention to political party systems in predicting policy outcomes. Finally, it demonstrates a novel approach (QCA) to interpreting state data in a theoretically significant manner.
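As a rough illustration of the QCA step described above, the sketch below builds a crisp-set truth table and scores each configuration's consistency with BDSL adoption. The condition names, the 0/1 coding, and the 0.8 consistency cut-off are assumptions; dedicated QCA software would also perform the Boolean minimization omitted here.

```python
# Sketch: crisp-set QCA truth table with consistency scores (hypothetical conditions).
import pandas as pd

def truth_table(states: pd.DataFrame, conditions: list[str],
                outcome: str = "bdsl_adopted") -> pd.DataFrame:
    """`states`: one row per state, with 0/1-coded conditions and outcome."""
    grouped = states.groupby(conditions)[outcome]
    table = grouped.agg(n="size", consistency="mean").reset_index()
    # Treat a configuration as sufficient for the outcome if consistency >= 0.8.
    table["sufficient"] = table["consistency"] >= 0.8
    return table.sort_values("consistency", ascending=False)

# Hypothetical usage with assumed condition names:
# tt = truth_table(states_df, ["unified_party_control", "ballot_initiative_access",
#                              "strong_issue_network", "budget_surplus"])
```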