983 results for Discrete models
Abstract:
This article presents important properties of standard discrete distributions and their conjugate densities. The Bernoulli and Poisson processes are described as generators of such discrete models. A characterization of distributions by mixtures is also introduced. This article adopts a novel singular notation and representation. Singular representations are unusual in statistical texts; nevertheless, the singular notation makes it simpler to extend and generalize theoretical results and greatly facilitates numerical and computational implementation.
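The conjugate pairings the abstract refers to can be illustrated with the simplest case: the Beta prior is conjugate to the Bernoulli likelihood, so the posterior stays in the Beta family and the update is a closed-form count. A minimal sketch (not taken from the article; the function name and prior choice are illustrative assumptions):

```python
# Illustrative sketch of Beta-Bernoulli conjugacy: observing s successes
# in n Bernoulli trials updates a Beta(a, b) prior to Beta(a + s, b + n - s).

def beta_bernoulli_update(a, b, observations):
    """Return the posterior Beta parameters after Bernoulli observations."""
    s = sum(observations)   # number of successes
    n = len(observations)   # number of trials
    return a + s, b + n - s

# A uniform Beta(1, 1) prior updated with 7 successes out of 10 trials.
post_a, post_b = beta_bernoulli_update(1, 1, [1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
posterior_mean = post_a / (post_a + post_b)   # (1 + 7) / (2 + 10) = 2/3
```

The same closed-form structure is what makes conjugate families convenient for the numerical implementations the abstract mentions.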
Abstract:
Ties among event times are often recorded in survival studies. For example, in a two-week laboratory study where event times are measured in days, ties are very likely to occur. The proportional hazards model might be used in this setting with an approximated partial likelihood function. This approximation works well when the number of ties is small. On the other hand, discrete regression models are suggested when the data are heavily tied. However, in many situations it is not clear which approach should be used in practice. In this work, empirical guidelines based on Monte Carlo simulations are provided. These recommendations are based on a measure of the amount of tied data present and on the mean square error. An example illustrates the proposed criterion.
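The abstract does not specify its measure of the amount of tied data; a simple candidate is the fraction of event times that share their value with at least one other event. A minimal sketch (the measure below is an assumption for illustration, not necessarily the article's own):

```python
from collections import Counter

def tie_fraction(event_times):
    """Fraction of observed event times whose value is shared with at
    least one other event -- a crude measure of how heavily tied the
    data are (the article's measure may differ)."""
    counts = Counter(event_times)
    tied = sum(c for c in counts.values() if c > 1)
    return tied / len(event_times)

# Days-to-event in a hypothetical two-week study: many ties are expected.
times = [3, 3, 5, 7, 7, 7, 9, 12]
frac = tie_fraction(times)   # 5 of 8 times are tied -> 0.625
```

A rule of thumb of this kind could then be compared against a threshold to decide between the approximate partial likelihood and a discrete regression model.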
Abstract:
Mathematics Subject Classification: 26A33, 45K05, 60J60, 60G50, 65N06, 80-99.
Abstract:
In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped into k intervals so that ties are eliminated. Thus, the data modeling is performed by considering discrete lifetime regression models. The model parameters are estimated by the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, referred to as global influence, and measures based on small perturbations of the data or of the model, referred to as local influence, are used. In addition to those measures, the local influence and the total influential estimate are also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed by using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, what causes it to occur? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum-based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models, the rate of seismic energy release is correlated with the total root-mean-squared stress and with the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed which indicates that stress alone is sufficient to produce an accelerating release of seismic energy with time prior to a large earthquake.
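Block-slider models of the kind examined above share a common skeleton: blocks carry stress, the system is driven quasistatically until one block fails, and the failed block redistributes part of its stress to its neighbours, possibly triggering an avalanche ("earthquake") whose size is the number of slips. A minimal one-dimensional sketch in that spirit (the threshold, the redistribution fraction `alpha`, and the zero-residual-stress rule are simplifying assumptions, not the paper's exact model):

```python
# Minimal quasistatic block-slider sketch (Burridge-Knopoff /
# Olami-Feder-Christensen spirit): drive all blocks uniformly to the next
# failure, then let slips cascade, each slip passing a fraction `alpha`
# of the failed block's stress to each nearest neighbour.

def drive_and_relax(stress, threshold=1.0, alpha=0.25):
    """Uniformly load all blocks until the most-stressed one reaches the
    failure threshold, then relax. Returns (new_stress, avalanche_size)."""
    m = max(stress)
    stress = [s - m + threshold for s in stress]   # quasistatic drive
    size = 0
    while max(stress) >= threshold:
        i = stress.index(max(stress))
        s = stress[i]
        stress[i] = 0.0                            # block slips
        if i > 0:
            stress[i - 1] += alpha * s             # neighbours load up
        if i + 1 < len(stress):
            stress[i + 1] += alpha * s
        size += 1
    return stress, size

state = [0.2, 0.9, 0.4, 0.6]
state, quake = drive_and_relax(state)              # single-slip event here
```

Tracking avalanche sizes over many drive cycles is how simulated catalogues of small and large events are produced in such models.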
Abstract:
Multiscale modeling is emerging as one of the key challenges in mathematical biology. However, the recent rapid increase in the number of modeling methodologies being used to describe cell populations has raised a number of interesting questions. For example, at the cellular scale, how can the appropriate discrete cell-level model be identified in a given context? Additionally, how can the many phenomenological assumptions used in the derivation of models at the continuum scale be related to individual cell behavior? In order to begin to address such questions, we consider a discrete one-dimensional cell-based model in which cells are assumed to interact via linear springs. From the discrete equations of motion, the continuous Rouse [P. E. Rouse, J. Chem. Phys. 21, 1272 (1953)] model is obtained. This formalism readily allows the definition of a cell number density for which a nonlinear "fast" diffusion equation is derived. Excellent agreement is demonstrated between the continuum and discrete models. Subsequently, via the incorporation of cell division, we demonstrate that the derived nonlinear diffusion model is robust to the inclusion of more realistic biological detail. In the limit of stiff springs, where cells can be considered to be incompressible, we show that cell velocity can be directly related to cell production. This assumption is frequently made in the literature but our derivation places limits on its validity. Finally, the model is compared with a model of a similar form recently derived for a different discrete cell-based model and it is shown how the different diffusion coefficients can be understood in terms of the underlying assumptions about cell behavior in the respective discrete models.
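The discrete equations of motion underlying the model above are overdamped spring dynamics: each interior cell moves at a rate proportional to the net force from its two linear springs, which is exactly a discrete Laplacian. A minimal sketch of one such chain relaxing toward uniform spacing (fixed end cells, explicit Euler stepping, and the parameter values are illustrative assumptions):

```python
# Overdamped linear-spring cell chain: dx_i/dt = k (x_{i+1} - 2 x_i + x_{i-1})
# for interior cells. The end cells are held fixed here for simplicity.

def euler_step(x, k=1.0, dt=0.01):
    """One explicit Euler step of the spring dynamics (stable for dt*k <= 0.5)."""
    new = x[:]
    for i in range(1, len(x) - 1):
        new[i] = x[i] + dt * k * (x[i + 1] - 2 * x[i] + x[i - 1])
    return new

# Cells start unevenly spaced and relax toward uniform spacing,
# mirroring the diffusive behavior of the continuum limit.
x = [0.0, 0.2, 0.9, 1.5, 2.0]
for _ in range(5000):
    x = euler_step(x)
# Interior positions approach 0.5, 1.0, 1.5.
```

Taking the cell spacing to zero in these equations is the step that yields the nonlinear diffusion equation for the cell number density discussed in the abstract.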
Abstract:
Though discrete cell-based frameworks are now commonly used to simulate a whole range of biological phenomena, it is typically not obvious how the numerous different types of model are related to one another, nor which one is most appropriate in a given context. Here we demonstrate how individual cell movement on the discrete scale, modeled using nonlinear force laws, can be described by nonlinear diffusion coefficients on the continuum scale. A general relationship between nonlinear force laws and their respective diffusion coefficients is derived in one spatial dimension and, subsequently, a range of particular examples is considered. For each case, excellent agreement is observed between numerical solutions of the discrete and corresponding continuum models. Three case studies are considered in which we demonstrate how the derived nonlinear diffusion coefficients can be used to (a) relate different discrete models of cell behavior; (b) derive discrete, intercell force laws from previously posed diffusion coefficients; and (c) describe aggregative behavior in discrete simulations.
Abstract:
In this paper, we derive score test statistics to discriminate between proportional hazards and proportional odds models for grouped survival data. These models are embedded within a power family transformation in order to obtain the score tests. In simple cases, some small-sample results are obtained for the score statistics using Monte Carlo simulations. Score statistics have distributions well approximated by the chi-squared distribution. Real examples illustrate the proposed tests.
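Since the abstract reports that the score statistics are well approximated by the chi-squared distribution, the final step of such a test is a chi-squared p-value. For one degree of freedom this has a closed form via the complementary error function, P(X > x) = erfc(sqrt(x/2)). A minimal sketch (the one-degree-of-freedom case is an illustrative assumption; the paper's tests may involve more):

```python
import math

def chi2_1df_pvalue(score_stat):
    """P-value of a score statistic against its chi-squared(1) reference
    distribution. If Z ~ N(0,1) then Z^2 ~ chi2(1), so
    P(X > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(score_stat / 2.0))

# A score statistic of 3.84 sits at the usual 5% critical value.
p = chi2_1df_pvalue(3.84)   # ~0.050
```

Comparing the statistic to the 3.84 critical value (or the p-value to 0.05) is then the discrimination rule between the proportional hazards and proportional odds alternatives within the power family.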
Abstract:
The discrete models of the Toda and Volterra chains are constructed out of the continuum two-boson KP hierarchies. The main tool is a discrete symmetry that preserves the Hamiltonian structure of the continuum models. The two-boson currents of the KP hierarchy are associated with the sites of the corresponding chain by successive actions of the discrete symmetry.
Abstract:
In this work, integro-differential reaction-diffusion models are presented for describing the temporal and spatial evolution of the concentrations of the Abeta and tau proteins involved in Alzheimer's disease. Initially, a local model is analysed: it is obtained by coupling, through an interaction term, two heterodimer models modified by adding diffusion and Holling type II functional terms. We then move on to three nonlocal models, which differ according to the type of growth (exponential, logistic or Gompertzian) considered for the healthy proteins. In these models, integral terms are introduced to account for interactions between proteins located at different, possibly distant, spatial points. For each of the models introduced, the equilibrium points and their stability are determined and the clearance inequalities are studied. In addition, since the integral terms introduce a spatial nonlocality into the models, some general features of nonlocal models are presented. Afterwards, with the aim of developing simulations, the nonlocal models are transferred to a brain graph called the connectome. After setting out the construction of such a graph, we describe the Laplacian and convolution operations on a graph. Using all these elements, we finally translate the continuous models described above into discrete models on the connectome. To conclude, the results of some simulations of the discrete models just derived are presented.
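Discretizing a continuum diffusion term onto a graph such as the connectome replaces the spatial Laplacian with the graph Laplacian L = D - A (degree matrix minus adjacency matrix), so pure diffusion becomes du/dt = -L u. A minimal sketch on a hypothetical 4-node path graph with one explicit Euler step (the graph, time step, and absence of reaction terms are illustrative assumptions):

```python
# Graph diffusion sketch: one Euler step of du/dt = -L u with L = D - A,
# on a tiny path graph standing in for a connectome.

def laplacian(adj):
    """Graph Laplacian L = D - A from an adjacency matrix (list of lists)."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def diffuse(u, L, dt=0.1):
    """One explicit Euler step of du/dt = -L u (diffusion only, no reaction)."""
    n = len(u)
    return [u[i] - dt * sum(L[i][j] * u[j] for j in range(n)) for i in range(n)]

adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]            # path graph 0-1-2-3
u = diffuse([1.0, 0.0, 0.0, 0.0], laplacian(adj))
# Concentration leaks from node 0 to node 1; total mass is conserved.
```

Reaction terms (growth, interaction, clearance) would be added to the right-hand side node by node, which is how the continuous models of the abstract become discrete models on the connectome.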
Abstract:
A theoretical density-functional study has been carried out to analyze the exchange coupling in the chains of CuGeO3 using discrete models. The results show a good agreement with the experimental exchange coupling constant (J) together with a strong dependence of J with the Cu-O-Cu angle. The calculation of the J values for a distorted model indicates a larger degree of dimerization than those reported previously.
Abstract:
Zero-inflated discrete and continuous models have a wide range of applications and their properties are well known. Although there is existing work on zero-deflated and zero-modified discrete models, the usual formulation of zero-inflated continuous models, namely a mixture of a continuous density and a Dirac point mass, cannot be generalized to cover the zero-deflated case. An alternative formulation of zero-inflated continuous models, which can readily be generalized to the zero-deflated case, is presented here. Estimation is first addressed under the classical paradigm, and several methods for obtaining maximum likelihood estimators are proposed. The point estimation problem is also considered from a Bayesian point of view. Classical and Bayesian hypothesis tests for determining whether data are zero-inflated or zero-deflated are presented. The estimation and testing methods are evaluated through simulation studies and applied to aggregated precipitation data. The various methods agree that the data are zero-deflated, demonstrating the relevance of the proposed model. We then consider the clustering of zero-deflated data samples. Since such data are strongly non-normal, there is reason to believe that standard methods for determining the number of clusters perform poorly. We argue that Bayesian clustering, based on the marginal distribution of the observations, would account for the particularities of the model and therefore perform better. Several clustering methods are compared through a simulation study, and the proposed method is applied to aggregated precipitation data from 28 measurement stations in British Columbia.
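In the discrete case that the abstract contrasts with the continuous one, zero-modification has a standard closed form: a parameter pi shifts the probability mass at zero relative to the baseline distribution, with pi > 0 giving inflation and pi < 0 giving deflation. A minimal sketch for a zero-modified Poisson (the baseline choice and parameter values are illustrative assumptions; the article's proposed continuous formulation is not reproduced here):

```python
import math

def zero_modified_poisson_pmf(k, lam, pi):
    """PMF of a zero-modified Poisson: pi shifts mass at zero relative to
    Poisson(lam). pi > 0 inflates zeros, pi < 0 deflates them; validity
    requires pi >= -exp(-lam) / (1 - exp(-lam)) so that P(X = 0) >= 0."""
    base = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * base
    return (1 - pi) * base

# Zero-deflated example: pi = -0.1 lowers P(X = 0) below the Poisson value.
p0_poisson = math.exp(-1.0)                        # baseline P(X = 0)
p0_deflated = zero_modified_poisson_pmf(0, 1.0, -0.1)
```

The obstacle the abstract points to is that this recipe has no direct continuous analogue: a continuous density carries no point mass at zero to deflate, which motivates the alternative formulation proposed in the work.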