9 results for parametric estimate
in DigitalCommons@The Texas Medical Center
Abstract:
Prevalent sampling is an efficient and focused approach to studying the natural history of disease. Right-censored time-to-event data observed in prospective prevalent cohort studies are often subject to left-truncated sampling. Left-truncated samples are not randomly selected from the population of interest and carry a selection bias. Extensive work has focused on estimating the unbiased distribution from left-truncated samples. In many applications, however, the exact date of disease onset is not observed. In an HIV infection study, for example, the exact infection time is not observable; it is only known to fall between two observable dates. Meeting these challenges motivated our study. We propose parametric models to estimate the unbiased distribution of left-truncated, right-censored time-to-event data with uncertain onset times. We first consider data from length-biased sampling, a special case of left-truncated sampling, and then extend the proposed method to general left-truncated sampling. Under a parametric model, we construct the full likelihood for a biased sample with unobservable disease onset and estimate the parameters by maximizing this likelihood, adjusting for both the selection bias and the unobserved exact onset. Simulations are conducted to evaluate the finite-sample performance of the proposed methods. We apply the method to an HIV infection study, estimating the unbiased survival function and covariate coefficients.
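As an illustration of the kind of likelihood involved, the sketch below fits a Weibull model to left-truncated, right-censored data by maximum likelihood. It covers only the standard case with an exactly observed onset (event contribution f(t)/S(a) and censoring contribution S(t)/S(a), given entry time a); it is not the authors' estimator for interval-uncertain onsets, and the simulated data and parameter values are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def negloglik(params, t, a, d):
    """Negative log-likelihood for left-truncated, right-censored Weibull data.
    t: time from onset, a: truncation (entry) time, d: event indicator."""
    shape, scale = np.exp(params)                        # log-parameterization keeps both positive
    logf = weibull_min.logpdf(t, shape, scale=scale)
    logS = weibull_min.logsf(t, shape, scale=scale)
    logS_a = weibull_min.logsf(a, shape, scale=scale)
    # event: f(t)/S(a);  censored: S(t)/S(a)
    return -np.sum(d * logf + (1 - d) * logS - logS_a)

rng = np.random.default_rng(0)
n = 2000
x = weibull_min.rvs(1.5, scale=10, size=5 * n, random_state=rng)   # onset-to-event times
a = rng.uniform(0, 30, size=5 * n)                                  # onset-to-entry times
keep = x > a                                                        # prevalent sampling: event-free at entry
x, a = x[keep][:n], a[keep][:n]
c = a + rng.exponential(8, size=n)                                  # residual censoring after entry
t = np.minimum(x, c)
d = (x <= c).astype(float)

fit = minimize(negloglik, x0=np.log([1.0, 5.0]), args=(t, a, d))
print("estimated shape, scale:", np.exp(fit.x))                     # roughly (1.5, 10)
```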
Abstract:
Calcium levels in spines play a significant role in determining the sign and magnitude of synaptic plasticity. The magnitude of calcium influx into spines depends strongly on influx through N-methyl-D-aspartate (NMDA) receptors, and therefore on the number of postsynaptic NMDA receptors in each spine. We have previously calculated how the number of postsynaptic NMDA receptors determines the mean and variance of calcium transients in the postsynaptic density, and how this alters the shape of plasticity curves. However, the number of NMDA receptors in the postsynaptic density is not well known. Anatomical methods for estimating the number of NMDA receptors produce estimates that are very different from those produced by physiological techniques. The physiological techniques are based on the statistics of synaptic transmission, and their precision is difficult to estimate experimentally. In this paper we use stochastic simulations to test the validity of a physiological estimation technique based on failure analysis. We find that the method is likely to underestimate the number of postsynaptic NMDA receptors, explain the source of the error, and re-derive a more precise estimation technique. We also show that neither the original failure analysis nor our improved formulas are robust to small estimation errors in key parameters.
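For readers unfamiliar with failure analysis, the toy simulation below shows the basic arithmetic: if each of N receptors opens independently with probability p_open when transmitter is released, the failure probability is (1 - p_open)^N, so N can be read off the observed failure rate as ln(p_fail) / ln(1 - p_open). This is only a schematic of the classical formula, not the authors' stochastic-simulation framework, and it ignores the calcium-transient and detection effects they analyze.

```python
import numpy as np

rng = np.random.default_rng(1)
N_true, p_open, n_trials, n_reps = 10, 0.3, 200, 5000

estimates = []
for _ in range(n_reps):
    opens = rng.binomial(N_true, p_open, size=n_trials)   # receptors opening per trial
    p_fail = np.mean(opens == 0)                          # observed failure rate
    if p_fail > 0:                                        # skip the rare run with no failures
        estimates.append(np.log(p_fail) / np.log(1 - p_open))

print("true N:", N_true, " mean failure-analysis estimate:", round(np.mean(estimates), 2))
```

Note that the estimate depends nonlinearly on the observed failure rate and on an assumed p_open, so finite trial counts and errors in p_open propagate directly into the receptor-number estimate.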
Abstract:
Introduction. It has been well established that poor uninsured children lack access to dental care and have greater dental needs than their insured counterparts. Objective. To assess the capacity of Bexar County's dental safety net to treat children, and to assess the dental needs of Bexar County children ages 0-18 who are uninsured or who are Medicaid or SCHIP recipients. Methods. Information was requested from dental safety net clinics that treat children ages 0-18. Data from the census, NHANES, and other sources were used to estimate dental needs. Results. The capacity of the current safety net is 33,537 patient encounters per year, while the dental needs of the community amount to 227,124 patient encounters per year. Conclusion. The results indicate that Bexar County is not prepared to meet the dental needs of underserved children in San Antonio.
Abstract:
Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was conducted to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were used for demand estimation. Analytical methods. A probit regression was used to estimate the use/non-use equation, and a negative binomial regression was applied to the utilization equation, whose dependent variable is a non-negative integer count. Results. The model comprises two equations: the use/non-use equation explains the probability of making a doctor visit in the past twelve months, and the utilization equation estimates the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect on demand. People with college and graduate education were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01): existence of a chronic disease was associated with 0.63 more visits, disability status with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting a doctor in the past twelve months was 85%, and the average number of visits was 3.45. The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment, and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regressions, proved a useful approach to demand estimation for primary care in urban settings.
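A minimal sketch of the two-part estimation strategy described above, using statsmodels on synthetic stand-in data; the variable names and coefficients are hypothetical, not the CHIS 2005 specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for survey data (hypothetical covariates).
rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "age": rng.uniform(18, 85, n),
    "insured": rng.binomial(1, 0.8, n),
    "chronic": rng.binomial(1, 0.3, n),
})
xb = -0.5 + 0.9 * df["insured"] + 0.8 * df["chronic"] - 0.005 * df["age"]
df["any_visit"] = (xb + rng.normal(size=n) > 0).astype(int)
mu = np.exp(0.6 + 0.2 * df["insured"] + 0.5 * df["chronic"])
counts = rng.poisson(mu * rng.gamma(2.0, 0.5, n))          # gamma mixing -> overdispersed counts
df["visits"] = np.where(df["any_visit"] == 1, 1 + counts, 0)

X = sm.add_constant(df[["age", "insured", "chronic"]])

# Part 1: probit for the probability of any doctor visit in the past year.
probit = sm.Probit(df["any_visit"], X).fit(disp=0)

# Part 2: negative binomial for visit counts, conditional on at least one visit.
users = df["any_visit"] == 1
negbin = sm.NegativeBinomial(df.loc[users, "visits"], X[users]).fit(disp=0)

print(probit.params)
print(negbin.params)
```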
Abstract:
Recent outbreaks of dengue fever (DF) along the United States/Mexico border, coupled with the high number of reported cases in Mexico, suggest the possibility of DF emergence in Houston, Texas (1, 2). To determine the presence of DF, populations of Aedes aegypti and Aedes albopictus were identified and tested for dengue virus. Maps were created to identify "hot spots" (Figure 1) based on historical data on Ae. aegypti and Ae. albopictus, demographic information, and locations of human cases of dengue fever. BG Sentinel Traps®, in conjunction with BG Lure® attractant, octanol, and dry ice, were used to collect mosquitoes, which were then tested for the presence of dengue virus (DV) using ELISA techniques. All samples tested negative for DV. Persistence of DV ultimately depends on whether it is vectored by a mosquito to a susceptible human host; the presence of infected humans and contact with the mosquito vectors are two critical factors necessary for the establishment of DF. Historical records indicate the presence of Ae. aegypti and Ae. albopictus in Harris County, which would support localized dengue transmission if infected individuals are present.
(1) Brunkard JM, Robles-Lopez JL, Ramirez J, Cifuentes E, Rothenberg SJ, Hunsperger EA, Moore CG, Brussolo RM, Villarreal NA, Haddad BM, 2007. Dengue fever seroprevalence and risk factors, Texas-Mexico border, 2004. Emerg Infect Dis 13: 1477-1483.
(2) Ramos MM, Mohammed H, Zielinski-Gutierrez E, Hayden MH, Lopez JL, Fournier M, Trujillo AR, Burton R, Brunkard JM, Anaya-Lopez L, Banicki AA, Morales PK, Smith B, Munoz JL, Waterman SH, 2008. Epidemic dengue and dengue hemorrhagic fever at the Texas-Mexico border: results of a household-based seroepidemiologic survey, December 2005. Am J Trop Med Hyg 78: 364-369.
Abstract:
Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in samples simulated from 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, C_p and S_p, each combined with either an 'all possible subsets' or a 'forward selection' search of the variables. The estimators of performance include a parametric (MSEP_m) and a non-parametric (PRESS) assessment in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half-sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures.

The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performance of C_p and S_p. In every case, prediction errors of models obtained by subset selection in either of the half-splits exceed those obtained using all predictors and the entire sample.

Only the random-split estimator is conditionally (on β) unbiased; however, MSEP_m is unbiased on average, and PRESS is nearly so, in unselected (fixed-form) models. When subset selection techniques are used, MSEP_m and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random-split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value in the context of stochastic regressor variables.

To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development and that a leave-one-out statistic (e.g., PRESS) be used for assessment.
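Since the recommendation rests on a leave-one-out statistic, a short sketch of the standard PRESS computation may help: for a linear model with hat matrix H, PRESS = Σ_i (e_i / (1 - h_ii))², where e_i are the ordinary residuals. The data-generating setup below is illustrative and is not the set of 32 population correlation matrices used in the study.

```python
import numpy as np

def press_statistic(X, y):
    """Leave-one-out prediction error sum of squares via the hat matrix."""
    Xd = np.column_stack([np.ones(len(y)), X])       # add intercept
    H = Xd @ np.linalg.inv(Xd.T @ Xd) @ Xd.T
    resid = y - H @ y                                # ordinary residuals
    return np.sum((resid / (1.0 - np.diag(H))) ** 2)

rng = np.random.default_rng(3)
n, p = 60, 4
cov = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)        # equicorrelated stochastic regressors
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)

press = press_statistic(X, y)
print("PRESS:", round(press, 2), " per observation:", round(press / n, 3))
```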
Abstract:
Evaluation of a series of deaths due to a particular disease is a frequently requested task in occupational epidemiology. Several techniques are available to determine whether such a series represents an occupational health problem. Each of these techniques, however, is subject to limitations, including cost, applicability to a given situation, feasibility relative to available resources, and potential for bias. In light of these problems, a technique was developed to estimate the standardized mortality ratio at a greatly reduced cost. The technique is demonstrated by its application to an investigation of brain cancer among employees of a large chemical company.
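The abstract does not spell out the reduced-cost technique itself; for context, the quantity being estimated is the standardized mortality ratio from indirect standardization, SMR = observed deaths / expected deaths, where the expected count comes from stratum-specific person-years and reference rates. The strata and rates below are hypothetical.

```python
import numpy as np

# Hypothetical age strata: cohort person-years, observed deaths,
# and reference-population death rates (per person-year).
person_years    = np.array([12000.0, 25000.0, 18000.0, 6000.0])
observed_deaths = np.array([2, 9, 14, 11])
reference_rates = np.array([1e-4, 3e-4, 8e-4, 2e-3])

expected = np.sum(person_years * reference_rates)   # deaths expected at reference rates
smr = observed_deaths.sum() / expected
print(f"expected deaths: {expected:.1f}   SMR: {smr:.2f}")
```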
Abstract:
Covariate measurement error occurs in many applications of regression analysis, and the error-prone covariates are often referred to as latent variables. In this study, we extended the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression setting. We presented an approach that applies the Monte Carlo method within a Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator uses the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study showed that the method produces an estimator that is efficient in the multiple regression model, especially when the measurement-error variance of the surrogate variable is large.
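To make the attenuation problem concrete, the sketch below simulates a multiple regression with one error-prone covariate and applies regression calibration, a simpler classical correction that replaces the surrogate with E[X | W]; it assumes the measurement-error variance is known and is not the authors' Bayesian Monte Carlo conditional-expectation estimator.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
sigma_x, sigma_u = 1.0, 0.8                     # latent-covariate and measurement-error SDs (assumed known)
x = rng.normal(0.0, sigma_x, n)                 # latent covariate (unobserved in practice)
z = rng.normal(0.0, 1.0, n)                     # error-free covariate
w = x + rng.normal(0.0, sigma_u, n)             # surrogate: x measured with error
y = 1.0 + 2.0 * x - 1.0 * z + rng.normal(0.0, 1.0, n)

def ols(design, y):
    return np.linalg.lstsq(design, y, rcond=None)[0]

ones = np.ones(n)
naive = ols(np.column_stack([ones, w, z]), y)

# Regression calibration: replace w with E[X | W] = lambda * w (zero means),
# where lambda = sigma_x^2 / (sigma_x^2 + sigma_u^2) is the reliability ratio.
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
calibrated = ols(np.column_stack([ones, lam * w, z]), y)

print("true slope on x: 2.0")
print("naive slope on w:", round(naive[1], 3))          # attenuated toward zero
print("calibrated slope:", round(calibrated[1], 3))     # close to 2.0
```

The naive fit recovers roughly lambda times the true slope, the attenuation described in the measurement-error literature; the correction matters more as the error variance of the surrogate grows, consistent with the simulation finding reported above.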