975 results for Medical application
Abstract:
Health departments, research institutions, policy-makers, and healthcare providers are often interested in knowing the health status of their clients/constituents. Without the resources, financial or administrative, to go out into the community and conduct health assessments directly, these entities frequently rely on data from population-based surveys to supply the information they need. Unfortunately, these surveys are ill-equipped for the job due to sample size and privacy concerns. Small area estimation (SAE) techniques have excellent potential in such circumstances, but have been underutilized in public health due to lack of awareness of and confidence in applying their methods. The goal of this research is to make model-based SAE accessible to a broad readership using clear, example-based learning. Specifically, we applied the principles of multilevel, unit-level SAE to describe the geographic distribution of HPV vaccine coverage among females aged 11-26 in Texas.

Multilevel (3-level: individual, county, public health region) random-intercept logit models of HPV vaccination (receipt of ≥ 1 dose of Gardasil®) were fit to data from the 2008 Behavioral Risk Factor Surveillance System (outcome and level 1 covariates) and a number of secondary sources (group-level covariates). Sampling weights were scaled (level 1) or constructed (levels 2 & 3), and incorporated at every level. Using the regression coefficients (and standard errors) from the final models, I simulated 10,000 values of each regression coefficient from the normal distribution and applied them to the logit model to estimate HPV vaccine coverage in each county and respective demographic subgroup. For simplicity, I only provide coverage estimates (and 95% confidence intervals) for counties.

County-level coverage among females aged 11-17 varied from 6.8% to 29.0%. For females aged 18-26, coverage varied from 1.9% to 23.8%. Aggregated to the state level, these values translate to indirect state estimates of 15.5% and 11.4%, respectively, both of which fall within the confidence intervals for the direct estimates of HPV vaccine coverage in Texas (females 11-17: 17.7%, 95% CI: 13.6, 21.9; females 18-26: 12.0%, 95% CI: 6.2, 17.7).

Small area estimation has great potential for informing policy, program development and evaluation, and the provision of health services. Harnessing the flexibility of multilevel, unit-level SAE to estimate HPV vaccine coverage among females aged 11-26 in Texas counties, I have provided (1) practical guidance on how to conceptualize and conduct model-based SAE, (2) a robust framework that can be applied to other health outcomes or geographic levels of aggregation, and (3) HPV vaccine coverage data that may inform the development of health education programs, the provision of health services, the planning of additional research studies, and the creation of local health policies.
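The simulation step described above can be sketched as follows. This is a minimal illustration only: the coefficient values, standard errors, and county design vector are invented placeholders, not the fitted values from the thesis, and a single covariate stands in for the full multilevel model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fitted coefficients (log-odds scale) and standard errors
# from a random-intercept logit model -- illustrative values only.
beta_hat = np.array([-1.8, 0.4])   # intercept, one covariate
se_hat = np.array([0.15, 0.10])
x_county = np.array([1.0, 0.6])    # design vector for one county

# Simulate 10,000 coefficient vectors from independent normals and
# push each through the inverse logit to get a coverage estimate.
draws = rng.normal(beta_hat, se_hat, size=(10_000, 2))
linpred = draws @ x_county
coverage = 1.0 / (1.0 + np.exp(-linpred))

point = coverage.mean()
lo, hi = np.percentile(coverage, [2.5, 97.5])
print(f"coverage: {point:.3f} (95% interval: {lo:.3f}, {hi:.3f})")
```

The 2.5th and 97.5th percentiles of the simulated coverage values serve as the interval estimate, mirroring the county-level intervals reported above.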
Abstract:
The primary interest was in predicting the distribution of runs in a sequence of Bernoulli trials. Difference equation techniques were used to express the number of runs of a given length k in n trials under three assumptions: (1) no runs of length greater than k, (2) no runs of length less than k, and (3) no other assumptions about the length of runs. Generating functions were utilized to obtain the distributions of the future number of runs, the future number of minimum-length runs, and the future number of maximum-length runs, unconditional on the number of successes and failures in the Bernoulli sequence. When applied to Texas hydrology data, the model provided an adequate fit in eight of the ten regions. Suggested health applications of this approach to run theory are provided.
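For small n, run-count distributions of the kind derived above by generating functions can be checked by brute-force enumeration. The sketch below assumes a "run" means a maximal block of consecutive successes, and counts runs of length at least k; it is a verification tool, not the generating-function derivation itself.

```python
from itertools import product

def count_success_runs(seq, k):
    """Count maximal runs of successes (1s) of length >= k in seq."""
    count = run = 0
    for trial in list(seq) + [0]:   # sentinel 0 closes a trailing run
        if trial == 1:
            run += 1
        else:
            if run >= k:
                count += 1
            run = 0
    return count

def run_count_distribution(n, k, p):
    """Exact distribution of the number of length->=k success runs in
    n independent Bernoulli(p) trials, by full enumeration."""
    dist = {}
    for seq in product([0, 1], repeat=n):
        prob = p ** sum(seq) * (1 - p) ** (n - sum(seq))
        c = count_success_runs(seq, k)
        dist[c] = dist.get(c, 0.0) + prob
    return dist

print(run_count_distribution(n=6, k=2, p=0.5))
```

Enumeration grows as 2^n, so it is only practical for small n, but it gives an exact reference distribution against which a closed-form result can be tested.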
Abstract:
The standard analyses of survival data involve the assumption that survival and censoring are independent. When censoring and survival are related, the phenomenon is known as informative censoring. This paper examines the effects of an informative censoring assumption on the hazard function and the estimated hazard ratio provided by the Cox model.

The limiting factor in all analyses of informative censoring is the problem of non-identifiability. Non-identifiability implies that it is impossible to distinguish a situation in which censoring and death are independent from one in which there is dependence. However, it is possible that informative censoring occurs. Examination of the literature indicates how others have approached the problem and covers the relevant theoretical background.

Three models are examined in detail. The first model uses conditionally independent marginal hazards to obtain the unconditional survival function and hazards. The second model is based on the Gumbel Type A method for combining independent marginal distributions into bivariate distributions using a dependency parameter. Finally, a formulation based on a compartmental model is presented and its results are described. For the latter two approaches, the resulting hazard is used in the Cox model in a simulation study.

The unconditional survival distribution formed from the first model involves dependency, but the crude hazard resulting from this unconditional distribution is identical to the marginal hazard, and inferences based on the hazard are valid. The hazard ratios formed from two distributions following the Gumbel Type A model are biased by a factor dependent on the amount of censoring in the two populations and the strength of the dependency between death and censoring in the two populations. The Cox model estimates this biased hazard ratio. In general, the hazard resulting from the compartmental model is not constant, even if the individual marginal hazards are constant, unless censoring is non-informative. The hazard ratio tends to a specific limit.

Methods of evaluating situations in which informative censoring is present are described, and the relative utility of the three models examined is discussed.
Abstract:
In Part One, the foundations of Bayesian inference are reviewed, and the technicalities of the Bayesian method are illustrated. Part Two applies the Bayesian meta-analysis program, the Confidence Profile Method (CPM), to clinical trial data and evaluates the merits of using Bayesian meta-analysis for overviews of clinical trials.

The Bayesian method of meta-analysis produced results similar to the classical results because of the large sample size, along with the input of a non-preferential prior probability distribution. These results were anticipated through the explanations in Part One of the mechanics of the Bayesian approach.
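The mechanism behind that finding — a vague prior plus precise data yields a posterior that essentially reproduces the classical estimate — can be shown with a conjugate normal-normal update. The numbers are illustrative placeholders, not the CPM analysis or the thesis data.

```python
import math

def normal_posterior(prior_mean, prior_var, data_mean, data_var):
    """Conjugate normal-normal update: precision-weighted average of
    prior and data; returns (posterior mean, posterior variance)."""
    w_prior = 1.0 / prior_var
    w_data = 1.0 / data_var
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * data_mean)
    return post_mean, post_var

# A vague ("non-preferential") prior on a pooled log odds ratio,
# combined with a precise estimate from a large trial overview.
post_mean, post_var = normal_posterior(
    prior_mean=0.0, prior_var=100.0,   # nearly flat prior
    data_mean=-0.25, data_var=0.01,    # large-sample classical estimate
)
print(post_mean, math.sqrt(post_var))
```

Because the data precision (1/0.01) dwarfs the prior precision (1/100), the posterior mean sits almost exactly on the classical estimate of -0.25, which is the behavior the abstract reports.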
Abstract:
In the midst of health care reform, and as health care organizations reorganize to provide more cost-effective care, the population is being shifted into new health care delivery systems such as health insurance purchasing alliances and health maintenance organizations. These new models of delivery are usually organized within resource-restricted and data-limited environments. Health care planners are faced with the challenge of identifying priorities for preventive and primary care services within these newly organized populations (Medicare HMO, Medicaid HMO, etc.). The author proposes a technique usually employed in epidemiology--attributable risk estimation--as a planning methodology to establish preventive health priorities within newly organized populations. Illustrations of the methodology are provided utilizing the Texas 1992 population.
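Attributable risk estimation in this planning sense is commonly computed with Levin's population attributable fraction, which converts an exposure prevalence and a relative risk into the share of cases attributable to the exposure. The sketch below uses that standard formula; the smoking example numbers are illustrative, not figures from the dissertation.

```python
def population_attributable_risk(prevalence, relative_risk):
    """Levin's population attributable fraction:
    PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative inputs: 25% exposure prevalence, relative risk of 10
# (roughly the classic smoking / lung cancer magnitude).
paf = population_attributable_risk(0.25, 10.0)
print(f"{paf:.1%} of cases attributable to the exposure")
```

Ranking conditions by PAF (rather than by relative risk alone) is what lets a planner prioritize preventive services by the burden actually removable from the enrolled population.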
Abstract:
In geographical epidemiology, maps of disease rates and disease risk provide a spatial perspective for researching disease etiology. For rare diseases, or when the population base is small, the rate and risk estimates may be unstable. Empirical Bayesian (EB) methods have been used to spatially smooth the estimates by permitting an area estimate to "borrow strength" from its neighbors. Such EB methods include the use of a Gamma model, of a James-Stein estimator, and of a conditional autoregressive (CAR) process. A fully Bayesian analysis of the CAR process is proposed. One advantage of this fully Bayesian analysis is that it can be implemented simply by repeated sampling from the posterior densities. Use of a Markov chain Monte Carlo technique such as the Gibbs sampler was not necessary. Direct resampling from the posterior densities provides exact small-sample inferences instead of the approximate asymptotic analyses of maximum likelihood methods (Clayton & Kaldor, 1987). Further, the proposed CAR model allows covariates to be included in the model. A simulation demonstrates the effect of sample size on the fully Bayesian analysis of the CAR process. The methods are applied to lip cancer data from Scotland, and the results are compared.
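The "borrow strength" idea behind the Gamma model variant mentioned above can be shown in a few lines: with a Gamma prior on an area's relative risk and Poisson-distributed observed counts, the posterior mean shrinks the raw SMR toward the prior mean. This is a generic Gamma-Poisson sketch, not the CAR model of the thesis, and the prior parameters below are invented for illustration.

```python
def eb_smoothed_rate(obs, expected, alpha, beta):
    """Gamma-Poisson smoothing of a relative risk.

    With risk ~ Gamma(alpha, beta) and obs ~ Poisson(risk * expected),
    the posterior mean is (alpha + obs) / (beta + expected): a compromise
    between the raw SMR (obs / expected) and the prior mean alpha / beta."""
    return (alpha + obs) / (beta + expected)

# A small area with 1 observed vs. 0.5 expected cases has raw SMR 2.0,
# but the smoothed estimate is pulled toward the prior mean of 1.0.
raw = 1 / 0.5
smoothed = eb_smoothed_rate(1, 0.5, alpha=5.0, beta=5.0)
print(raw, smoothed)
```

The smaller the expected count, the more the estimate defers to the prior, which is exactly the instability-damping behavior wanted for sparsely populated areas.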
Abstract:
Path analysis has been applied to components of the iron metabolic system with the intent of suggesting an integrated procedure for better evaluating iron nutritional status at the community level. The primary variables of interest in this study were (1) iron stores, (2) total iron-binding capacity, (3) serum ferritin, (4) serum iron, (5) transferrin saturation, and (6) hemoglobin concentration. Correlation coefficients for relationships among these variables were obtained from published literature and postulated in a series of models using measures of those variables that are feasible to include in a community nutritional survey. Models were built upon known information about the metabolism of iron and were limited by what had been reported in the literature in terms of correlation coefficients or quantitative relationships. Data were pooled from various studies, and correlations of the same bivariate relationships were averaged after z-transformations. Correlation matrices were then constructed by transforming the average values back into correlation coefficients.

The results of path analysis in this study indicate that hemoglobin is not a good indicator of early iron deficiency: it does not account for variance in iron stores. On the other hand, 91% of the variance in iron stores is explained by serum ferritin and total iron-binding capacity. In addition, the magnitude of the path coefficient (.78) of the serum ferritin-iron stores relationship signifies that serum ferritin is the most important predictor of iron stores in the proposed model. Finally, drawing upon known relations among variables and the amount of variance explained in path models, it is suggested that the following blood measures should be made in assessing community iron deficiency: (1) serum ferritin, (2) total iron-binding capacity, (3) serum iron, (4) transferrin saturation, and (5) hemoglobin concentration. These measures (with acceptable ranges and cut-off points) could make possible the complete evaluation of all three stages of iron deficiency in those persons surveyed at the community level.
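The correlation-pooling step described above — averaging same-pair correlations after z-transformation, then back-transforming — is the standard Fisher z procedure and can be sketched directly. The three studies and their sample sizes below are invented for illustration; the thesis pooled correlations from the published literature.

```python
import math

def pooled_correlation(correlations, sample_sizes):
    """Average correlations via Fisher z-transformation, weighting each
    study's z by n - 3, then back-transform to a pooled r."""
    num = den = 0.0
    for r, n in zip(correlations, sample_sizes):
        z = math.atanh(r)        # Fisher z-transform of r
        w = n - 3                # inverse-variance weight for z
        num += w * z
        den += w
    return math.tanh(num / den)  # back to the correlation scale

# Illustrative values: three studies reporting the serum ferritin -
# iron stores correlation.
r_pooled = pooled_correlation([0.70, 0.80, 0.75], [50, 120, 80])
print(round(r_pooled, 3))
```

Averaging on the z scale rather than the r scale avoids the bias that comes from the bounded, skewed sampling distribution of r itself.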
Abstract:
Two respirable coal fly ash samples (≤ 3 μm), one from a pressurized fluidized-bed combustion miniplant and one from a conventional combustion power plant, were investigated for physical properties, chemical composition, and biological activity. Electron microscopy illustrated irregularity in fluidized-bed combustion fly ash and sphericity in conventional combustion fly ash. Elemental analysis of these samples showed differences in trace elements. Both fly ash samples were toxic in rabbit alveolar macrophage and Chinese hamster ovary cell systems in vitro. The macrophages were more sensitive to the toxicity of fly ash than the ovary cells. For measuring the cytotoxicity of fly ash, the most sensitive parameters were adenosine triphosphate in the alveolar macrophage system and the viability index in the hamster ovary system. Intact fluidized-bed combustion fly-ash particles showed mutagenicity only in strains TA98 and TA1538 without metabolic activation in the Ames Salmonella assay. No mutagenicity was detected in the bioassay of conventional combustion fly ash particles. Solvent extraction yielded more mass from fluidized-bed combustion fly ash than from conventional combustion fly ash. The extracts of fluidized-bed combustion fly ash showed higher mutagenic activity than those of conventional combustion fly ash. These samples contained direct-acting, frameshift mutagens.

Fly ash samples collected from the same fluidized-bed source by cyclones, a fabric filter, and an electrostatic precipitator at various temperatures were compared for particle size, toxicity, and mutagenicity. Results demonstrated that the biological activity of coal fly ash was affected by the collection site, device, and temperature.

Coal fly ash vapor-coated with 1-nitropyrene was developed as a model system to study the bioavailability and recovery of nitroaromatic compounds in fly ash. The effects of vapor deposition on the toxicity and mutagenicity of fly ash were examined. The nitropyrene coating did not significantly alter the ash's cytotoxicity. Nitropyrene was bioavailable in the biological media, and a significant percentage was not recovered after the coated fly ash was cultured with alveolar macrophages. 1-Nitropyrene loss increased as the number of macrophages increased, suggesting that the macrophages are capable of metabolizing or binding 1-nitropyrene present in coal fly ash.
Abstract:
An experimental procedure was developed using the Brainstem Evoked Response (BER) electrophysiological technique to assess the effect of neurotoxic substances on the auditory system. The procedure utilizes Sprague-Dawley albino rats that have had dural electrodes implanted in their skulls, allowing neuroelectric evoked potentials to be recorded from their brainstems. Latency and amplitude parameters derived from the evoked potentials help assess the neuroanatomical integrity of the auditory pathway in the brainstem. Moreover, since frequency-specific auditory stimuli are used to evoke the neural responses, additional audiometric information is obtainable. An investigation of non-exposed control animals shows that the BER threshold curve obtained by tests at various frequencies very closely approximates that obtained by behavioral audibility tests. Thus, the BER appears to be a valid measure of both the functional and neuroanatomical integrity of the afferent auditory neural pathway.

To determine the usefulness of the BER technique in neurobehavioral toxicology research, a known neurotoxic agent, Pb, was studied. Female Sprague-Dawley rats were dosed for 45 days with low levels of Pb acetate in their drinking water, after which BER recordings were obtained. The Pb dosages were determined from the findings of an earlier pilot study. One group of 6 rats received normal tap water, one group of 7 rats received a solution of 0.1% Pb, and another group of 7 rats received a solution of 0.2% Pb. After 45 days, the three groups exhibited blood Pb levels of 4.5 ± 0.43 μg/100 ml, 37.8 ± 4.8 μg/100 ml, and 47.3 ± 2.7 μg/100 ml, respectively.

The results of the BER recording indicated evoked response waveform latency abnormalities in both Pb-treated groups when midrange frequency (8 kHz to 32 kHz) stimuli were used. For the most part, waveform amplitudes did not vary significantly from control values. BER recordings obtained after a 30-day recovery period indicated that the effects seen in the 0.1% Pb group had disappeared. However, the anomalies exhibited by the 0.2% Pb group either remained or increased in number. This outcome indicates a longer-lasting or possibly irreversible effect on the auditory system from the higher dose of Pb. The auditory pathway effect appears to be in the periphery, at the level of the cochlea or the auditory (VIII) nerve. The results of this research indicate that the BER technique is a valuable and sensitive indicator of low-level toxic effects on the auditory system.
Abstract:
This study provides a review of the current alcoholism planning process of the Houston-Galveston Area Council, an agency carrying out planning for a thirteen-county region surrounding Houston, Texas. The four central groups involved in this planning are identified, and the role that each plays and how it affects the planning outcomes are discussed.

The most substantive outcome of the Houston-Galveston Area Council's alcoholism planning, the Regional Alcoholism/Alcohol Abuse Plan, is examined. Many shortcomings in the data provided, and the lack of other data necessary for planning, are noted.

A problem-oriented planning model is presented as an alternative to the Houston-Galveston Area Council's current service-oriented approach to alcoholism planning. The five primary phases of the model (identification of the problem, statement of objectives, selection of alternative programs, implementation, and evaluation) are presented, and an overview of the tasks involved in the application of this model to alcoholism planning is offered.

A specific aspect of the model, the use of problem status indicators, is explored using cirrhosis and suicide mortality data. A review of the literature suggests that, based on five criteria (availability, subgroup identification, validity, reliability, and sensitivity), both suicide and cirrhosis are suitable as indicators of the alcohol problem when combined with other indicators.

Cirrhosis and suicide mortality data are examined for the thirteen-county Houston-Galveston Region for the years 1969 through 1976. Data limitations preclude definite conclusions concerning the alcohol problem in the region. Three hypotheses about the nature of the regional alcohol problem are presented. First, there appears to be no linear trend in the number of alcoholics at risk of suicide and cirrhosis mortality. Second, the number of alcoholics in the metropolitan areas seems to be greater than the number in rural areas. Third, the number of male alcoholics at risk of cirrhosis and suicide mortality is greater than the number of female alcoholics.
Abstract:
Traditional comparison of standardized mortality ratios (SMRs) can be misleading if the age-specific mortality ratios are not homogeneous. For this reason, a regression model has been developed which expresses the mortality ratio as a function of age. This model is then applied to mortality data from an occupational cohort study. The nature of the occupational data necessitates the investigation of mortality ratios which increase with age. These occupational data are used primarily to illustrate and develop the statistical methodology.

The age-specific mortality ratio (MR) for the covariates of interest can be written as

    MR_ij...m = μ_ij...m / θ_ij...m = r · exp(Z′_ij...m β)

where μ_ij...m and θ_ij...m denote the force of mortality in the study and chosen standard populations in the ij...m-th stratum, respectively, r is the intercept, Z_ij...m is the vector of covariables associated with the i-th age interval, and β is a vector of regression coefficients associated with these covariables. A Newton-Raphson iterative procedure has been used to determine the maximum likelihood estimates of the regression coefficients.

This model provides a statistical method for a logical and easily interpretable explanation of an occupational cohort mortality experience. Since it gives a reasonable fit to the mortality data, it can also be concluded that the model is fairly realistic. The traditional statistical method for the analysis of occupational cohort mortality data is to present a summary index such as the SMR under the assumption of constant (homogeneous) age-specific mortality ratios. Since the mortality ratios for occupational groups usually increase with age, the homogeneity assumption of the age-specific mortality ratios is often untenable. The traditional method of comparing SMRs under the homogeneity assumption is a special case of this model, without age as a covariate.

This model also provides a statistical technique to evaluate the relative risk between two SMRs or a dose-response relationship among several SMRs. The model presented has application in the medical, demographic, and epidemiologic areas. The methods developed in this thesis are suitable for future analyses of mortality or morbidity data when the age-specific mortality/morbidity experience is a function of age or when an interaction effect between confounding variables needs to be evaluated.
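A two-parameter instance of this model, MR_i = r · exp(β · age_i) with observed deaths treated as Poisson given the expected counts, can be fit by Newton-Raphson as described above. The data below are invented to show an MR rising with age; this is a sketch of the estimation scheme, not the thesis's full multi-stratum fit.

```python
import math

def fit_mortality_ratio(obs, exp_counts, age, iters=25):
    """Fit MR_i = r * exp(beta * age_i) by Newton-Raphson, with
    obs[i] ~ Poisson(exp_counts[i] * MR_i). Returns (r, beta)."""
    a, b = 0.0, 0.0                       # a = log r
    for _ in range(iters):
        mu = [e * math.exp(a + b * x) for e, x in zip(exp_counts, age)]
        # Score vector of the Poisson log-likelihood.
        s_a = sum(o - m for o, m in zip(obs, mu))
        s_b = sum(x * (o - m) for x, o, m in zip(age, obs, mu))
        # Observed information matrix (negative Hessian).
        h_aa = sum(mu)
        h_ab = sum(x * m for x, m in zip(age, mu))
        h_bb = sum(x * x * m for x, m in zip(age, mu))
        det = h_aa * h_bb - h_ab * h_ab
        a += (h_bb * s_a - h_ab * s_b) / det
        b += (h_aa * s_b - h_ab * s_a) / det
    return math.exp(a), b

# Illustrative cohort: observed deaths outpace expected deaths more
# at older ages, so the mortality ratio rises with (centered) age.
obs = [12, 20, 35, 60]
exp_counts = [10.0, 15.0, 22.0, 30.0]
age = [-1.5, -0.5, 0.5, 1.5]
r, beta = fit_mortality_ratio(obs, exp_counts, age)
print(r, beta)
```

With β = 0 this reduces to the single-SMR special case noted in the abstract; a fitted β > 0 quantifies how far the homogeneity assumption fails.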
Abstract:
In November 2010, nearly 110,000 people in the United States were waiting for organs for transplantation. Despite the fact that the organ donor registration rate has doubled in the last year, Texas has the lowest registration rate in the nation. Due to the need for improved registration rates in Texas, the aim of this practice-based culminating experience was to write an application for federal funding for the central Texas organ procurement organization, Texas Organ Sharing Alliance. The culminating experience has two levels of significance for public health: (1) to engage in an activity to promote organ donation registration, and (2) to provide professional experience in grant writing.

The process began with a literature review, conducted to identify successful intervention activities for motivating organ donation registration that could be used in the intervention design for the grant application. Conclusions derived from the literature review included (1) the need to specifically encourage family discussions, (2) the potential for religious and community leaders to be leveraged to facilitate organ donation conversations in families, (3) the requirement that communication content be culturally sensitive, and (4) the need for ethnic disparities in transplantation to be acknowledged and discussed.

After the literature review, the experience followed a five-step process of developing the grant application. The steps included securing permission to proceed, assembling a project team, creating a project plan and timeline, writing each element of the grant application including the design of proposed intervention activities, and completing the federal grant application.

After the grant application was written, an evaluation of the grant writing process was conducted, and opportunities for improvement were identified. The first was the need for better timeline management to allow for review of the application by an independent party, iterative development of the budget proposal, and development of collaborative partnerships. Another was the management of conflict regarding the design of the intervention, which stemmed from marketing versus evidence-based approaches. The most important improvement opportunity was the need to develop a more exhaustive evaluation plan.

Eight supplementary files are attached as appendices: Feasibility Discussion in Appendix 1, Grant Guidance and Workshop Notes in Appendix 2, Presentation to Texas Organ Sharing Alliance in Appendix 3, Team Recruitment Presentation in Appendix 5, Grant Project Narrative in Appendix 7, Federal Application Form in Appendix 8, and Budget Workbook with Budget Narrative in Appendix 9.
Abstract:
Background and Objective. Ever since the human development index was published in 1990 by the United Nations Development Programme (UNDP), many researchers have searched for and comparatively studied more effective methods to measure human development. Published in 1999, Lai's "Temporal analysis of human development indicators: principal component approach" provided a valuable statistical approach to human development analysis. The study presented in this thesis is an extension of Lai's 1999 research.

Methods. I used the weighted principal component method on the human development indicators to measure and analyze the progress of human development in about 180 countries around the world from 1999 to 2010. The association between the main principal component obtained from the study and the human development index reported by the UNDP was estimated by Spearman's rank correlation coefficient. The main principal component was then further applied to quantify the temporal changes in the human development of selected countries by the proposed Z-test.

Results. The weighted means of all three human development indicators (health, knowledge, and standard of living) increased from 1999 to 2010. The weighted standard deviation for GDP per capita also increased across years, indicating rising inequality in the standard of living among countries. The ranking of low-development countries by the main principal component (MPC) is very similar to that by the human development index (HDI). Considerable discrepancy between the MPC and HDI rankings was found among high-development countries, with high-GDP-per-capita countries shifted to higher ranks. The Spearman's rank correlation coefficients between the main principal component and the human development index were all around 0.99. All the above results were very close to the outcomes in Lai's 1999 report. The Z-test on the temporal change in the main principal component from 1999 to 2010 was statistically significant for Qatar, but not for the other selected countries, such as Brazil, Russia, India, China, and the U.S.A.

Conclusion. To synthesize the multi-dimensional measurement of human development into a single index, the weighted principal component method provides a good model, using a statistical tool for comprehensive ranking and measurement. The weighted main principal component index is more objective because it uses the populations of nations as weights, more effective when the analysis spans time and space, and more flexible when the set of countries reporting to the system changes from year to year. In conclusion, the index generated using the weighted main principal component has some advantages over the human development index in UNDP reports.
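The core computation — a first principal component of the indicator matrix with countries weighted by population — can be sketched as follows. The five "countries", three indicators, and population weights are invented toy data, and this plain weighted-correlation PCA is only an assumed reading of the method, not Lai's exact procedure.

```python
import numpy as np

def weighted_first_pc(X, w):
    """First principal component scores for rows of X (countries x
    indicators), weighting countries by w (e.g., population shares)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    # Weighted standardization of each indicator column.
    mean = (w[:, None] * X).sum(axis=0)
    var = (w[:, None] * (X - mean) ** 2).sum(axis=0)
    Z = (X - mean) / np.sqrt(var)
    # Weighted correlation matrix and its leading eigenvector.
    C = (Z * w[:, None]).T @ Z
    vals, vecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    v = vecs[:, -1]                     # eigenvector of largest eigenvalue
    return Z @ v                        # component score per country

# Toy data: 5 countries, 3 indicators (health, knowledge, standard of
# living), weighted by population -- not real UNDP figures.
X = np.array([[70.0,  8.0,  9000.0],
              [80.0, 12.0, 30000.0],
              [60.0,  5.0,  2000.0],
              [75.0, 10.0, 15000.0],
              [82.0, 13.0, 40000.0]])
pop = [50, 20, 100, 30, 10]
scores = weighted_first_pc(X, pop)
print(scores)
```

Ranking countries by these scores (the eigenvector's sign is arbitrary, so orientation must be fixed by convention) is what gets compared against the HDI ranking via Spearman's rank correlation.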