9 results for STATISTICAL INFORMATION

in DigitalCommons@The Texas Medical Center


Relevance:

60.00%

Publisher:

Abstract:

In 1941 the Texas Legislature appropriated $500,000 to the Board of Regents of the University of Texas to establish a cancer research hospital. The M. D. Anderson Foundation offered to match the appropriation with a grant of an equal sum and to provide a permanent site in Houston. In August 1942 the Board of Regents of the University and the Trustees of the Foundation signed an agreement to embark on this project. This institution was to be the first one in the medical center, which was incorporated in October 1945. The Board of Trustees of the Texas Medical Center commissioned a hospital survey to:
- Define the needed hospital facilities in the area
- Outline an integrated program to meet these needs
- Define the facilities to be constructed
- Prepare general recommendations for efficient progress

The Hospital Study included information about population, hospitals, and other health care and education facilities in Houston and Harris County at that time. It included projected health care needs for future populations, education needs, and facility needs. It also included detailed information on needs for chronic illnesses, a school of public health, and nursing education. This study provides valuable information about the general population and the state of medicine in Houston and Harris County in the 1940s. It gives a unique perspective on the anticipated future as civic leaders looked forward in building the city and region. This document is critical to an understanding of the Texas Medical Center, Houston, and medicine as they are today.

SECTIONS INCLUDE:

Abstract. The Abstract was a summary of the 400-page document, including general information about the survey area, community medical assets, and current and projected medical needs which the Texas Medical Center should meet. The 123 recommendations were both general (e.g., 12. “That in future planning, the present auxiliary department of the larger hospitals be considered inadequate to carry an added teaching research program of any sizable scope.”) and specific (e.g., 22. “That 14.3% of the total acute bed requirement be allotted for obstetric care, reflecting a bed requirement of 522 by 1950, increasing to 1,173 by 1970.”)

Section I: Survey Area. This section addressed the first objective of the survey: “define the needed hospital facilities in the area.” Based on the admission statistics of hospitals, Harris County was included in the survey, with the recognition that growth from outlying regional areas could occur. Population characteristics and vital statistics were included, with future trends discussed. Each of the hospitals in the area and government and private health organizations, such as the City-County Welfare Board, were documented. Statistics on the facilities' use and capacity were given. Eighteen recommendations and observations on the survey area were given.

Section II: Community Program. This section addressed the second objective of the survey: “outline an integrated program to meet these needs.” The information from the Survey Area section formed the basis of the plans for development of the Texas Medical Center. In this section, specific needs, such as which medical specialties were needed, the location and general organization of a medical center, and the academic aspects, were outlined. Seventy-four recommendations for these plans were provided.

Section III: The Texas Medical Center. The third and fourth objectives are addressed. The specific facilities were listed and recommendations were made.

Section IV: Special Studies: Chronic Illness. The five leading causes of death (heart disease, cancer, “apoplexy”, nephritis, and tuberculosis) were identified and statistics for morbidity and mortality provided. Diagnostic, prevention, and care needs were discussed. Recommendations on facilities and other solutions were made.

Section IV: Special Studies: School of Public Health. An overview of the state of schools of public health in the US was provided, along with information on the direction and need for this special school. Recommendations on the development and organization of the proposed school were made.

Section IV: Special Studies: Needs and Education Facilities for Nurses. Nursing education was connected with hospitals, but changes to academic nursing programs were discussed. The need for well-trained nurses in an expanded medical environment was anticipated to result in significantly increased demand for these professionals. An overview of the current situation in the survey area and recommendations were provided.

Appendix A. Maps, tables, and charts provide background and statistical information for the previous sections.

Appendix B. Detailed census data for specific areas within the survey area were included. Sketches of each of the fifteen hospitals and five other health institutions showed historical information, accreditations, staff, available facilities (beds, x-ray, etc.), academic capabilities, and financial information.

Relevance:

30.00%

Publisher:

Abstract:

Purpose. To examine the association between living in proximity to Toxics Release Inventory (TRI) facilities and the incidence of childhood cancer in the State of Texas.

Design. This is a secondary data analysis utilizing the publicly available Toxics Release Inventory (TRI), maintained by the U.S. Environmental Protection Agency, which lists the facilities that release any of the 650 TRI chemicals. Total childhood cancer cases and childhood cancer rates (ages 0-14 years) by county for the years 1995-2003 were obtained from the Texas Cancer Registry, available on the Texas Department of State Health Services website. Setting: This study was limited to the child population of the State of Texas.

Method. Analysis was done using Stata version 9 and SPSS version 15.0. SaTScan was used for geographical spatial clustering of childhood cancer cases based on county centroids, using the Poisson clustering algorithm, which adjusts for population density. Pictorial maps were created using MapInfo Professional version 8.0.

Results. One hundred and twenty-five counties had no TRI facilities in their region, while 129 counties had at least one TRI facility. An increasing trend in the number of facilities and total disposal was observed across cancer-rate quartiles, except for the highest quartile. Linear regression using log-transformed number of facilities and total disposal to predict cancer rates was computed; however, neither variable was found to be a significant predictor. Seven significant geographical spatial clusters of counties with high childhood cancer rates (p<0.05) were identified. Binomial logistic regression, categorizing the cancer rate into two groups (<=150 and >150), indicated an odds ratio of 1.58 (CI 1.127, 2.222) for the natural log of the number of facilities.

Conclusion. We used a unique methodology combining GIS and spatial clustering techniques with existing statistical approaches to examine the association between living in proximity to TRI facilities and the incidence of childhood cancer in the State of Texas. Although a concrete association was not indicated, further studies examining specific TRI chemicals are required. Use of this information can enable researchers and the public to identify potential concerns, gain a better understanding of potential risks, and work with industry and government to reduce toxic chemical use, disposal, or other releases and the risks associated with them. TRI data, in conjunction with other information, can be used as a starting point in evaluating exposures and risks.
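
As an illustration of the analysis flow, the sketch below reproduces the two regression steps in Python rather than the Stata 9 / SPSS 15.0 used in the study; the column names (n_facilities, total_disposal, cancer_rate) are assumptions for illustration, not the study's variable names.

    # Sketch only: county-level TRI regressions with assumed column names.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def run_regressions(counties: pd.DataFrame) -> None:
        # Log-transform the exposure measures; log1p keeps zero-facility counties.
        X = sm.add_constant(pd.DataFrame({
            "log_n_facilities": np.log1p(counties["n_facilities"]),
            "log_total_disposal": np.log1p(counties["total_disposal"]),
        }))

        # Linear regression: do the log-transformed exposures predict cancer rates?
        linear = sm.OLS(counties["cancer_rate"], X).fit()
        print(linear.summary())

        # Binomial logistic regression on the dichotomized rate (<=150 vs >150).
        high_rate = (counties["cancer_rate"] > 150).astype(int)
        logit = sm.Logit(high_rate, X).fit()
        print(np.exp(logit.params))  # odds ratios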

Relevance:

30.00%

Publisher:

Abstract:

Standard methods for testing safety data are needed to ensure the safe conduct of clinical trials. In particular, objective rules for reliably identifying unsafe treatments need to be put into place to help protect patients from unnecessary harm. Data monitoring committees (DMCs) are uniquely qualified to evaluate accumulating unblinded data and make recommendations about the continuing safe conduct of a trial. However, it is the trial leadership who must make the difficult ethical decision about stopping a trial, and they could benefit from objective statistical rules that help them judge the strength of evidence contained in the blinded data. We design early stopping rules for harm that act as continuous safety screens for randomized controlled clinical trials with blinded treatment information, which could be used by anyone, including trial investigators and trial leadership. A Bayesian framework, with emphasis on the likelihood function, is used to allow for continuous monitoring without adjusting for multiple comparisons. Close collaboration between the statistician and the clinical investigators will be needed in order to design safety screens with good operating characteristics. Though the math underlying this procedure may be computationally intensive, implementation of the statistical rules will be easy, and the continuous screening provided will give suitably early warning should real problems emerge. Trial investigators and trial leadership need these safety screens to help them effectively monitor the ongoing safe conduct of clinical trials with blinded data.
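
For intuition only, the sketch below shows one way such a blinded continuous safety screen could look, assuming a conjugate Beta-Binomial model on the pooled event rate and placeholder values for the reference rate and alarm threshold; it is not the dissertation's likelihood-based rule.

    # Sketch only: conjugate Beta-Binomial screen on blinded, pooled safety data.
    from scipy.stats import beta

    def blinded_safety_screen(events: int, patients: int,
                              safe_rate: float = 0.10,
                              alarm_prob: float = 0.95,
                              prior_a: float = 1.0, prior_b: float = 1.0) -> bool:
        """Flag the trial when the blinded pooled event rate looks too high.

        Because the data stay blinded, the screen monitors the pooled rate and
        alarms when Pr(rate > safe_rate | data) exceeds alarm_prob; the posterior
        can be updated continuously without a multiplicity adjustment.
        """
        posterior = beta(prior_a + events, prior_b + patients - events)
        return (1.0 - posterior.cdf(safe_rate)) > alarm_prob

    # Example: 18 events among 100 enrolled patients against a 10% reference rate.
    print(blinded_safety_screen(events=18, patients=100))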

Relevance:

30.00%

Publisher:

Abstract:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and will give false-positive results. Although this problem can be dealt with effectively through approaches such as Bonferroni correction, permutation testing, and false discovery rates, patterns of joint effects of several genes, each with a weak effect, may not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in large data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we take two steps to achieve the goal. First, we selected 1,000 SNPs through an effective filter method, and then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB) method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis (LDA) in terms of classification performance. Finally, we performed chi-square tests to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through LDA is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search over small subsets (one SNP, two SNPs, or three-SNP subsets built from the best 100 composite 2-SNP pairs) can find an optimal subset, and further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter because of overfitting when more complex subset states are examined.

Our results also indicate that, unlike classification accuracy, HMSS can be used as a criterion to evaluate the classification ability of a function on imbalanced data without modifying the original dataset. Our four studies suggest that sIB, a new unsupervised technique, can be adopted to predict the outcome, and its ability to detect the target status is superior to that of traditional LDA in the study.

From our results we can see that the best test probability (HMSS) for predicting CVD, stroke, CAD, and psoriasis through sIB is 0.59406, 0.641815, 0.645315, and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918, and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be at least 0.4. On the other hand, the highest test accuracy of sIB for diagnosing disease among cases can reach 0.748644, 0.789916, 0.705701, and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association analysis through chi-square tests shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. The WTCCC results detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with high classification accuracy are also significantly associated with the disease by chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in the study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or a more efficient computing system, neither of which is currently available for our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability, and SNPs with good discriminant power are not necessarily causal markers for the disease.
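
As a rough sketch of the HMSS filter described above (not the dissertation's code), the Python function below scores each SNP by the harmonic mean of sensitivity and specificity of a one-feature LDA classifier and keeps the top-ranked SNPs; the data layout and the cutoff of 1,000 SNPs are assumptions.

    # Sketch only: HMSS filter, scoring each SNP with a one-feature LDA classifier.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import confusion_matrix

    def hmss_filter(genotypes: np.ndarray, disease: np.ndarray, top_k: int = 1000):
        """genotypes: (n_samples, n_snps) coded 0/1/2; disease: (n_samples,) 0/1."""
        scores = np.zeros(genotypes.shape[1])
        for j in range(genotypes.shape[1]):
            x = genotypes[:, [j]]
            pred = LinearDiscriminantAnalysis().fit(x, disease).predict(x)
            tn, fp, fn, tp = confusion_matrix(disease, pred, labels=[0, 1]).ravel()
            sens = tp / (tp + fn) if (tp + fn) else 0.0
            spec = tn / (tn + fp) if (tn + fp) else 0.0
            # The harmonic mean rewards SNPs that do well on both classes,
            # which is why it behaves better than raw accuracy on imbalanced data.
            scores[j] = 2 * sens * spec / (sens + spec) if (sens + spec) else 0.0
        return np.argsort(scores)[::-1][:top_k]  # indices of the top-ranked SNPs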

Relevance:

30.00%

Publisher:

Abstract:

Most studies of differential gene expression have been conducted between two given conditions. The two-condition experimental (TCE) approach is simple in that all genes detected display a common differential expression pattern responsive to a common two-condition difference. Therefore, genes that are differentially expressed under conditions other than the given two are undetectable with the TCE approach. To address this problem, we propose a new approach called the multiple-condition experiment (MCE) without replication and develop corresponding statistical methods, including inference of pairs of conditions for genes, new t-statistics, and a generalized multiple-testing method applicable to any multiple-testing procedure via a control parameter C. We applied these statistical methods to analyze our real MCE data from breast cancer cell lines and found that 85 percent of gene-expression variation was caused by genotypic effects and genotype-ANAX1 overexpression interactions, which agrees well with our expected results. We also applied our methods to the adenoma dataset of Notterman et al. and identified 93 differentially expressed genes that could not be found with the TCE approach. The MCE approach is a conceptual breakthrough in many respects: (a) many conditions of interest can be studied simultaneously; (b) studying the association between differential gene expression and conditions becomes easy; (c) it can provide more precise information for molecular classification and diagnosis of tumors; and (d) it can save a lot of experimental resources and time for investigators.
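
To make the MCE analysis flow concrete, the sketch below tests every pair of conditions for every gene and applies a standard Benjamini-Hochberg correction across all gene-pair tests; it stands in for, but does not reproduce, the paper's new t-statistics for unreplicated data or the generalized multiple-testing procedure with control parameter C, and the assumed known noise SD is a placeholder.

    # Sketch only: all pairwise condition contrasts per gene, with BH correction.
    import numpy as np
    from scipy.stats import norm
    from statsmodels.stats.multitest import multipletests

    def mce_screen(expr: np.ndarray, noise_sd: float, alpha: float = 0.05):
        """expr: (n_genes, n_conditions), one unreplicated measurement per condition.
        noise_sd: an assumed, externally estimated measurement SD, standing in for
        the paper's new t-statistics for unreplicated data."""
        n_genes, n_cond = expr.shape
        pvals, index = [], []
        for g in range(n_genes):
            for a in range(n_cond):
                for b in range(a + 1, n_cond):
                    z = (expr[g, a] - expr[g, b]) / (noise_sd * np.sqrt(2.0))
                    pvals.append(2 * norm.sf(abs(z)))
                    index.append((g, a, b))
        reject, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
        # Each hit names a gene and the specific pair of conditions driving it.
        return [index[i] for i in np.where(reject)[0]]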

Relevance:

30.00%

Publisher:

Abstract:

Making healthcare comprehensive and more efficient remains a complex challenge. Health Information Technology (HIT) is recognized as an important component of this transformation, but few studies describe HIT adoption and its effect on the bedside experience of physicians, staff, and patients. This study applied descriptive statistics and correlation analysis to data from the Patient-Centered Medical Home National Demonstration Project (NDP) of the American Academy of Family Physicians. Thirty-six clinics were followed for 26 months through clinician/staff questionnaires and patient surveys. This study characterizes those clinics as well as staff and patient perspectives on HIT usefulness, the doctor-patient relationship, electronic medical record (EMR) implementation, and computer connections in the practice throughout the study. The Global Practice Experience (GPE) factor, a composite score related to key components of primary care, was then correlated with clinician and patient perspectives. This study found wide adoption of HIT among NDP practices. Patient perspectives on the helpfulness of HIT to the doctor-patient relationship showed a suggestive trend (p = 0.172). Clinicians and staff noted successful integration of the EMR into clinic workflow, and their perception of its helpfulness to the doctor-patient relationship showed a suggestive increase approaching statistical significance (p = 0.06). GPE was correlated with clinician/staff assessment of a helpful doctor-patient relationship midway through the study (R = 0.460, p = 0.021), with the remaining time points nearing statistical significance. GPE was also correlated with both patient perspectives of EMR helpfulness in the doctor-patient relationship (R = 0.601, p = 0.001) and computer connections (R = 0.618, p = 0.0001) at the start of the study.
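
The reported associations are simple correlations between the GPE composite and survey items; a minimal sketch of that computation is shown below, with the data frame and column names (gpe, emr_helpfulness, computer_connections) assumed for illustration rather than taken from the NDP dataset.

    # Sketch only: Pearson correlations between the GPE composite and survey items.
    import pandas as pd
    from scipy.stats import pearsonr

    def gpe_correlations(clinics: pd.DataFrame) -> pd.DataFrame:
        rows = []
        for item in ["emr_helpfulness", "computer_connections"]:
            r, p = pearsonr(clinics["gpe"], clinics[item])
            rows.append({"item": item, "R": r, "p": p})
        return pd.DataFrame(rows)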

Relevance:

30.00%

Publisher:

Abstract:

In light of the new healthcare regulations, hospitals are increasingly reevaluating their IT integration strategies to meet expanded healthcare information exchange requirements. Nevertheless, hospital executives do not have all the information they need to differentiate between the available strategies and recognize which may better fit their organizational needs.

In the interest of providing the desired information, this study explored the relationships between hospital financial performance, integration strategy selection, and strategy change. The integration strategies examined, applied as binary logistic regression dependent variables and ordered from most to least integrated, were Single-Vendor (SV), Best-of-Suite (BoS), and Best-of-Breed (BoB). In addition, the financial measurements adopted as independent variables for the models were two administrative labor efficiency measures and six industry-standard financial ratios, designed to provide a broad proxy of hospital financial performance. Furthermore, descriptive statistical analyses were carried out to evaluate recent trends in hospital integration strategy change. Overall, six research questions were proposed for this study.

The first research question sought to answer whether financial performance was related to the selection of integration strategies. The next questions explored whether hospitals were more likely to change strategies or remain the same when there was no external stimulus to change, and, if they did change, whether they preferred strategies closer to their existing ones. These were followed by a question that asked whether financial performance was also related to strategy change. Rounding out the questions, the last two probed whether the new Health Information Technology for Economic and Clinical Health (HITECH) Act had any impact on the frequency and direction of strategy change.

The results confirmed that financial performance is related to both IT integration strategy selection and strategy change, and concurred with prior studies suggesting that hospital and environmental characteristics are associated factors as well. Specifically, this study noted that the most integrated SV strategy is related to increased administrative labor efficiency and the hybrid BoS strategy is associated with improved financial health (based on operating margin and equity financing ratios). On the other hand, no financial indicators were found to be related to the least integrated BoB strategy, except for short-term liquidity (current ratio) where strategy change was involved.

Ultimately, this study concluded that when making IT integration strategy decisions, hospitals closely follow the resource-dependence view of minimizing uncertainty. As each integration strategy may favor certain organizational characteristics, hospitals traditionally preferred not to make strategy changes, and when they did, they selected strategies more closely related to their existing ones. However, as new regulations further heighten revenue uncertainty while requiring increased information integration, and as evidence already suggests a growing trend of organizations shifting toward more integrated strategies, hospitals may be more limited in their strategy selection choices moving forward.
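
One of the strategy-selection models can be sketched as a binary logistic regression with strategy membership as the outcome and financial measures as predictors; the example below is a hedged illustration with assumed column names, not the study's actual variable list or model specification.

    # Sketch only: selection model for the single-vendor (SV) strategy.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_sv_selection_model(hospitals: pd.DataFrame):
        # strategy is coded "SV", "BoS", or "BoB"; the ratio columns are assumed names.
        data = hospitals.assign(is_sv=(hospitals["strategy"] == "SV").astype(int))
        model = smf.logit(
            "is_sv ~ operating_margin + equity_financing + current_ratio"
            " + admin_labor_efficiency",
            data=data,
        ).fit()
        print(np.exp(model.params))  # odds ratios for each financial measure
        return model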

Relevance:

30.00%

Publisher:

Abstract:

A population-based ecological study was conducted to identify areas with high numbers of new TB and HIV diagnoses in Harris County, Texas, from 2009 through 2010, applying Geographic Information Systems (GIS) and exploratory mapping to determine whether distinct spatial patterns exist at the census tract level. As of 2010, Texas had the fourth-highest occurrence of new diagnoses of HIV/AIDS and TB.[31] The Texas Department of State Health Services (DSHS) has identified HIV-infected persons as a high-risk population for TB in Harris County.[29] In order to explore this relationship further, GIS was utilized to identify spatial trends.

The specific aims were to map rates of new TB and HIV diagnoses and to spatially identify hotspots and high-value clusters at the census tract level. The potential association between HIV and TB was analyzed using spatial autocorrelation and linear regression analysis. The spatial statistics used were ArcGIS 9.3 Hotspot Analysis and Cluster and Outlier Analysis. Spatial autocorrelation was determined through Global Moran's I, and associations were further examined with linear regression analysis.

Hotspots and clusters of TB and HIV are located within the same spatial areas of Harris County. The areas with high-value clusters and hotspots for each infection are located within the central downtown area of the city of Houston. There is an additional hotspot area of TB located directly north of I-10 and a hotspot area of HIV northeast of Interstate 610.

The Moran's I index of 0.17 (Z score = 3.6 standard deviations, p-value = 0.01) suggests that TB is spatially clustered, with less than a 1% chance that this pattern is due to random chance. However, there was a high number of features with no neighbors, which may invalidate the statistical properties of the test. Linear regression analysis indicated that new HIV diagnosis rates (β = −0.006, SE = 0.147, p = 0.970) and census tracts (β = 0.000, SE = 0.000, p = 0.866) were not significant predictors of new TB diagnosis rates.

Mapping products indicate that census tracts with overlapping hotspots and high-value clusters of TB and HIV should be a targeted focus for prevention efforts, particularly within central Harris County. While the statistical association was not confirmed, evidence suggests that there is a relationship between HIV and TB within this two-year period.
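
The global clustering test reported above is Moran's I over tract-level rates; the self-contained sketch below computes the statistic from a rate vector and an assumed binary contiguity weight matrix (the study itself used ArcGIS 9.3 tools, not this code).

    # Sketch only: global Moran's I for tract-level rates with binary contiguity weights.
    import numpy as np

    def morans_i(rates: np.ndarray, W: np.ndarray) -> float:
        """rates: (n,) tract rates; W: (n, n) symmetric 0/1 contiguity weights."""
        n = rates.size
        z = rates - rates.mean()
        s0 = W.sum()  # total weight
        # I = (n / S0) * (z' W z) / (z' z); values above E[I] = -1/(n - 1)
        # indicate clustering of similar rates among neighboring tracts.
        return (n / s0) * (z @ W @ z) / (z @ z)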

Relevance:

30.00%

Publisher:

Abstract:

Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to more accurately assess the causal effects of many genetic and environmental factors. Genome-wide association studies have been able to localize many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their strengths and advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving the Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing the Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, originally developed for gene-environment interaction studies, to other related types of studies such as adaptive borrowing of historical data.

We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-by-gene interactions (epistasis), and gene-by-environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in the linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious, and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both, or at least one, of the main effects of interacting factors must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false-positive discoveries and yield a powerful approach to identifying predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model that does not impose this hierarchical constraint and observe their superior performance in most of the situations considered. The proposed models are applied to real-data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of being able to incorporate useful prior information in the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases. Our proposed models impose the hierarchical constraints, which further improve the Bayesian mixture model by reducing the proportion of false-positive findings among the identified interactions and successfully identifying the reported associations. This is practically appealing for studies investigating causal factors among a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions.

The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimates of effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting true main effects and interactions compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power to detect non-null effects, with higher marginal posterior probabilities.

Finally, we review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods able to handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in their success at balancing statistical efficiency and bias in a unified model. Through extensive simulation studies, we compare the operating characteristics of the proposed models with those of existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
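
The strong-hierarchy constraint central to the first section can be stated very compactly: an interaction indicator may be switched on only when both of its main-effect indicators are already in the model. The toy sketch below encodes just that rule inside a random proposal step; it illustrates the constraint only and is not the dissertation's MCMC sampler or prior specification.

    # Toy sketch: enforce the strong-hierarchy rule inside a random proposal step.
    import numpy as np

    rng = np.random.default_rng(0)

    def propose_interactions(main_in: np.ndarray, inter_prob: float = 0.2) -> np.ndarray:
        """main_in: (p,) 0/1 indicators for main effects currently in the model.
        Returns an upper-triangular (p, p) matrix of interaction indicators that
        respects strong hierarchy: an interaction j*k can be proposed only when
        both main effects j and k are already included."""
        p = main_in.size
        inter_in = np.zeros((p, p), dtype=int)
        for j in range(p):
            for k in range(j + 1, p):
                if main_in[j] and main_in[k] and rng.random() < inter_prob:
                    inter_in[j, k] = 1
        return inter_in

    # Example: with main effects 0 and 2 in the model, only the (0, 2) interaction
    # can ever be switched on; (0, 1) and (1, 2) stay excluded.
    print(propose_interactions(np.array([1, 0, 1])))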