1000 results for BERKSON-GAGE MODEL
Abstract:
Many recent survival studies propose modeling data with a cure fraction, i.e., data in which part of the population is not susceptible to the event of interest. This event may occur more than once for the same individual (a recurrent event). We then have a scenario of recurrent event data in the presence of a cure fraction, which arises in areas such as oncology, finance, and industry, among others. This paper proposes a multiple time scale survival model to analyze recurrent events with a cure fraction. The objective is to analyze, in terms of covariates and censoring, how effective certain interventions are at preventing the studied event from happening again. All estimates were obtained using a sampling-based approach, which allows prior information to be incorporated with low computational effort. Simulations based on a clinical scenario were carried out to assess frequentist properties of the estimation procedure for small and moderate sample sizes. An application to a well-known set of real mammary tumor data is provided.
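The classical Berkson-Gage formulation underlying this line of work treats the population as a mixture of cured and susceptible individuals. The sketch below illustrates only that mixture survival function, with an exponential baseline chosen purely for illustration; it is not the paper's multiple-time-scale recurrent-event model.

```python
import math

def mixture_cure_survival(t, cure_prob, rate):
    """Berkson-Gage mixture cure model: a fraction `cure_prob` of the
    population is immune; the rest has exponential survival with `rate`.
    S_pop(t) = pi + (1 - pi) * exp(-rate * t).
    (Exponential baseline is an illustrative assumption.)"""
    return cure_prob + (1.0 - cure_prob) * math.exp(-rate * t)

# As t grows, population survival levels off at the cure fraction pi,
# rather than decaying to zero as in a standard survival model.
s_early = mixture_cure_survival(1.0, cure_prob=0.3, rate=0.5)
s_late = mixture_cure_survival(50.0, cure_prob=0.3, rate=0.5)
```

The plateau at the cure fraction is what distinguishes cure models from ordinary survival models, where S(t) → 0.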
Abstract:
Estimating rare events from zero-heavy data (data with many zero values) is a common challenge in fisheries science and ecology. For example, loggerhead sea turtles (Caretta caretta) and leatherback sea turtles (Dermochelys coriacea) account for less than 1% of total catch in the U.S. Atlantic pelagic longline fishery. Nevertheless, the Southeast Fisheries Science Center (SEFSC) of the National Marine Fisheries Service (NMFS) is charged with assessing the effect of this fishery on these federally protected species. Annual estimates of loggerhead and leatherback bycatch in a fishery can affect fishery management and species conservation decisions. However, current estimates have wide confidence intervals, and their accuracy is unknown. We evaluate three estimation methods, each at two spatiotemporal scales, in simulations of five spatial scenarios representing incidental capture of sea turtles by the U.S. Atlantic pelagic longline fishery. The delta-lognormal method of estimating bycatch for calendar-quarter and fishing-area strata was the least biased estimation method in the spatial scenarios believed to be most realistic. This result supports the current estimation procedure used by the SEFSC.
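The delta-lognormal estimator mentioned above combines the proportion of positive sets with the lognormal mean of the positive catches. The following is a minimal sketch of that idea under simplifying assumptions (single stratum, no small-sample bias correction); it is not the SEFSC implementation.

```python
import math

def delta_lognormal_mean(catches):
    """Delta-lognormal estimate of mean catch from zero-heavy data:
    (proportion of positive observations) x (lognormal mean of the
    positive catches, exp(mu + sigma^2 / 2)). Minimal sketch only."""
    n = len(catches)
    positives = [c for c in catches if c > 0]
    if not positives:
        return 0.0
    p = len(positives) / n                 # probability of a nonzero catch
    logs = [math.log(c) for c in positives]
    mu = sum(logs) / len(logs)             # mean of log-catches
    if len(logs) > 1:
        var = sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)
    else:
        var = 0.0
    return p * math.exp(mu + var / 2.0)    # lognormal back-transform

# Example with 8 zero sets and 2 positive sets (illustrative values):
est = delta_lognormal_mean([0, 0, 0, 0, 0, 0, 0, 0, 2.0, 8.0])
```

Splitting the zero/nonzero process from the magnitude of positive catches is what keeps the estimator stable when most observations are zero.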
Abstract:
Culture of a non-native species, such as the Suminoe oyster (Crassostrea ariakensis), could offset the harvest of the declining native eastern oyster (Crassostrea virginica) fishery in Chesapeake Bay. Because of possible ecological impacts from introducing a fertile non-native species, introduction of sterile triploid oysters has been proposed. However, recent data show that a small percentage of triploid individuals progressively revert toward diploidy, introducing the possibility that Suminoe oysters might establish self-sustaining populations. To assess the risk of Suminoe oyster populations becoming established in Chesapeake Bay, a demographic population model was developed. Parameters modeled were salinity, stocking density, reversion rate, reproductive potential, natural and harvest-induced mortality, growth rates, and effects of various management strategies, including harvest strategies. In the model, the probability of a Suminoe oyster population becoming self-sustaining decreased when oysters were grown at low-salinity sites, certainty of harvest was high, minimum shell length-at-harvest was small, and stocking density was low. Based on these results, we suggest adopting the management strategies indicated by the model to decrease the probability of a Suminoe oyster population becoming self-sustaining. Policy makers and fishery managers can use the model to predict potential outcomes of policy decisions, supporting science-based decisions about the proposed introduction of triploid Suminoe oysters into Chesapeake Bay.
Abstract:
The problem of using information available from one variable X to make inference about another, Y, is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition, X, on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others.
In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random vector.
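A well-known consequence of the Berkson structure with a linear mean function is that regressing Y on the observed Z still recovers the slope, in contrast to the attenuation seen under the classical model W = X + error. A minimal simulation sketch (the slope, error scales, and sample size are illustrative values, not from the talk):

```python
import random

random.seed(1)

def ols_slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

true_slope = 2.0
zs = [random.uniform(0, 10) for _ in range(20000)]        # observed Z
xs = [z + random.gauss(0, 1) for z in zs]                 # X = Z + eta (Berkson)
ys = [true_slope * x + random.gauss(0, 1) for x in xs]    # Y = mu(X) + eps, mu linear

# Regressing Y on the observed Z still recovers the slope, because
# E[Y | Z] = true_slope * Z under the Berkson error structure.
slope_hat = ols_slope(zs, ys)
```

Under the classical model the same naive regression would shrink the slope toward zero, which is why the two error structures demand different fitting strategies.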
Abstract:
In this paper we extend the long-term survival model proposed by Chen et al. [Chen, M.-H., Ibrahim, J.G., Sinha, D., 1999. A new Bayesian model for survival data with a surviving fraction. Journal of the American Statistical Association 94, 909-919] via the generating function of a real sequence introduced by Feller [Feller, W., 1968. An Introduction to Probability Theory and Its Applications, third ed., vol. 1, Wiley, New York]. A direct consequence of this new formulation is the unification of the long-term survival models proposed by Berkson and Gage [Berkson, J., Gage, R.P., 1952. Survival curve for cancer patients following treatment. Journal of the American Statistical Association 47, 501-515] and Chen et al. (see citation above). We also show that the long-term survival function formulated in this paper satisfies the proportional hazards property if, and only if, the number of competing causes related to the occurrence of an event of interest follows a Poisson distribution. Furthermore, a model more flexible than the one proposed by Yin and Ibrahim [Yin, G., Ibrahim, J.G., 2005. Cure rate models: A unified approach. The Canadian Journal of Statistics 33, 559-570] is introduced and, motivated by Feller's results, a very useful competing index is defined.
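The proportional hazards property in the Poisson case can be checked numerically: when the number of competing causes N is Poisson(θ) with latent event times of cdf F, the population survival reduces to the promotion time form S_pop(t) = exp(-θ F(t)), so the hazard is θ f(t) and hazard ratios across θ are constant in t. The sketch below assumes exponential latent times purely for illustration:

```python
import math

def pop_survival_poisson(t, theta, base_cdf):
    """Population survival when the number of competing causes N is
    Poisson(theta) and each cause fires at a time with cdf base_cdf:
    S_pop(t) = E[S(t)^N] = exp(-theta * F(t))  (promotion time model)."""
    return math.exp(-theta * base_cdf(t))

def pop_hazard(t, theta, base_cdf, eps=1e-6):
    """Numerical hazard -d/dt log S_pop(t)."""
    s1 = pop_survival_poisson(t, theta, base_cdf)
    s2 = pop_survival_poisson(t + eps, theta, base_cdf)
    return (math.log(s1) - math.log(s2)) / eps

def exp_cdf(t):
    """Exponential(1) latent times, chosen only for illustration."""
    return 1.0 - math.exp(-t)

# Hazard ratio between theta = 2 and theta = 1 stays constant over time,
# which is exactly the proportional hazards property in the Poisson case.
ratios = [pop_hazard(t, 2.0, exp_cdf) / pop_hazard(t, 1.0, exp_cdf)
          for t in (0.5, 1.0, 3.0)]
```

For non-Poisson N the generating function E[S(t)^N] no longer factorizes this way, and the ratio would drift with t, matching the "if and only if" statement above.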