969 results for Probability distribution functions


Relevance: 40.00%

Abstract:

A five-parameter distribution, the so-called beta modified Weibull distribution, is defined and studied. The new distribution contains, as special submodels, several important distributions discussed in the literature, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among others. The new distribution can be used effectively in the analysis of survival data since it accommodates monotone, unimodal and bathtub-shaped hazard functions. We derive the moments and examine the order statistics and their moments. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to illustrate the importance and flexibility of the new distribution.
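
As a hedged sketch of the construction (following the usual beta-G recipe; the parametrization below is an illustrative assumption, not quoted from the paper), the five parameters plausibly enter as

```latex
% Beta-G construction with a modified Weibull baseline G:
F(x) = I_{G(x)}(a,b)
     = \frac{1}{B(a,b)} \int_0^{G(x)} t^{a-1} (1-t)^{b-1}\, dt,
\qquad
G(x) = 1 - \exp\!\left(-\alpha x^{\gamma} e^{\lambda x}\right), \quad x > 0.
% a, b: beta shape parameters; alpha, gamma, lambda: baseline parameters.
```

Setting $\lambda = 0$ reduces the baseline to the Weibull and hence recovers the beta Weibull submodel.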

Relevance: 40.00%

Abstract:

Estimation of Taylor's power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating function. Furthermore, we investigate a more general regression model allowing for site-specific covariates. This method may be efficiently implemented using a Newton scoring algorithm, with standard errors calculated from the inverse Godambe information matrix. The method is applied to a set of biomass data for benthic macrofauna from two Danish estuaries. (C) 2011 Elsevier B.V. All rights reserved.
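
For intuition, here is a minimal sketch of the naive log-log least-squares fit of Taylor's power law ($\mathrm{variance} = a \cdot \mathrm{mean}^b$) that the bias problem concerns; the function name and simulated data are illustrative assumptions, and the paper's bias-corrected Pearson estimating function is not reproduced here.

```python
# Sketch: naive OLS fit of Taylor's power law on log-log scale.
import numpy as np

def taylor_power_law_ols(counts):
    """counts: 2-D array, rows = sites, columns = replicate samples."""
    means = counts.mean(axis=1)
    variances = counts.var(axis=1, ddof=1)
    keep = (means > 0) & (variances > 0)   # log requires positive values
    x = np.log(means[keep])
    y = np.log(variances[keep])
    b, log_a = np.polyfit(x, y, 1)         # slope b, intercept log(a)
    return np.exp(log_a), b

rng = np.random.default_rng(0)
# Overdispersed counts: Poisson with gamma-distributed site means.
counts = rng.poisson(lam=rng.gamma(2.0, 2.0, size=20)[:, None], size=(20, 5))
a_hat, b_hat = taylor_power_law_ols(counts)
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}")
```

The need to drop zero means and variances before taking logs hints at why sparse data are problematic for this naive estimator.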

Relevance: 40.00%

Abstract:

A four-parameter extension of the generalized gamma distribution capable of modelling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a number of well-known lifetime special sub-models, such as the exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. We derive two infinite sum representations for its moments. We calculate the density of the order statistics and two expansions for their moments. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is obtained. Finally, a real data set from the medical area is analysed.
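
One common way to build such a four-parameter extension is to exponentiate the generalized gamma cdf; the parametrization below is an assumption for illustration, not quoted from the paper.

```latex
% gamma_1 is the incomplete gamma function ratio; alpha is a scale
% parameter and tau, k, lambda are positive shape parameters:
F(x) = \left[ \gamma_1\!\left(k, (x/\alpha)^{\tau}\right) \right]^{\lambda},
\qquad
\gamma_1(k, z) = \frac{1}{\Gamma(k)} \int_0^{z} w^{k-1} e^{-w}\, dw,
\quad x > 0.
```

For instance, $k = 1$ gives $F(x) = [1 - e^{-(x/\alpha)^{\tau}}]^{\lambda}$, an exponentiated Weibull, while $\lambda = 1$ recovers the generalized gamma itself.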

Relevance: 40.00%

Abstract:

We introduce the log-beta Weibull regression model based on the beta Weibull distribution (Famoye et al., 2005; Lee et al., 2007). We derive expansions for the moment generating function which do not depend on complicated functions. The new regression model represents a parametric family of models that includes as sub-models several widely known regression models that can be applied to censored survival data. We employ a frequentist analysis, a jackknife estimator and a parametric bootstrap for the parameters of the proposed model. We derive the appropriate matrices for assessing local influences on the parameter estimates under different perturbation schemes and present some ways to assess global influences. Further, for different parameter settings, sample sizes and censoring percentages, several simulations are performed. In addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be extended to a modified deviance residual in the proposed regression model applied to censored data. We define martingale and deviance residuals to evaluate the model assumptions. The extended regression model is very useful for the analysis of real data and could give more realistic fits than other special regression models.
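
The martingale and deviance residuals for censored data are presumably of the standard form (the notation below is ours, with $\delta_i$ the event indicator and $\hat S$ the fitted survival function):

```latex
r_{M_i} = \delta_i + \log \hat S(t_i),
\qquad
r_{D_i} = \operatorname{sign}(r_{M_i})
          \left\{ -2 \left[ r_{M_i} + \delta_i \log(\delta_i - r_{M_i}) \right] \right\}^{1/2}.
```

A modified deviance residual that behaves approximately like a standard normal variate is then a natural diagnostic, which is what the simulation comparison above checks.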

Relevance: 40.00%

Abstract:

The convection-dispersion model and its extended form have been used to describe solute disposition in organs and to predict hepatic availabilities. A range of empirical transit-time density functions has also been used for a similar purpose. The use of the dispersion model with mixed boundary conditions and transit-time density functions has been queried recently by Hisaka and Sugiyama in this journal. We suggest that, consistent with the soil science and chemical engineering literature, the mixed boundary conditions are appropriate provided that concentrations are defined in terms of flux to ensure continuity at the boundaries and mass balance. It is suggested that the use of the inverse Gaussian or other functions as empirical transit-time densities is independent of any boundary condition consideration. The mixed boundary condition solutions of the convection-dispersion model are the easiest to use when linear kinetics applies. In contrast, the closed conditions are easier to apply in a numerical analysis of nonlinear disposition of solutes in organs. We therefore argue that the choice of hepatic elimination model should be based on pragmatic considerations, giving emphasis to using the simplest or easiest solution that will give a sufficiently accurate prediction of hepatic pharmacokinetics for a particular application. (C) 2000 Wiley-Liss, Inc. and the American Pharmaceutical Association. J Pharm Sci 89:1579-1586, 2000.
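
For reference, the dispersion model with mixed (flux-type, Danckwerts) boundary conditions is usually written in a dimensionless form like the following; the paper's exact notation may differ.

```latex
% Axial dispersion of concentration C(Z, t) along the organ, 0 < Z < 1:
\frac{\partial C}{\partial t}
  = D_N \frac{\partial^2 C}{\partial Z^2} - \frac{\partial C}{\partial Z},
% mixed (flux) boundary conditions, preserving mass balance:
C_{\mathrm{in}}
  = \left. \left( C - D_N \frac{\partial C}{\partial Z} \right) \right|_{Z=0},
\qquad
\left. \frac{\partial C}{\partial Z} \right|_{Z=1} = 0,
```

where $D_N$ is the dispersion number; defining concentration in terms of flux is what makes these conditions consistent with continuity at the inlet and outlet.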

Relevance: 40.00%

Abstract:

A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after the index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting. Copyright (C) 2004 John Wiley & Sons, Ltd.
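
A hedged sketch of such a model (our notation; the paper's exact specification, e.g. where the hospital random effect $u$ enters, may differ): with mixing proportion $\pi$ for the acute component,

```latex
S(t \mid u) = \pi\, S_1(t \mid u) + (1 - \pi)\, S_2(t \mid u),
\qquad
S_j(t \mid u) = \exp\!\left\{ -\lambda_j t^{\gamma_j}
                  \exp\!\left( \mathbf{x}^{\top} \boldsymbol{\beta}_j + u \right) \right\},
\quad u \sim N(0, \sigma^2),
```

so each component $j$ has a Weibull baseline hazard, and the EM algorithm alternates between expected component memberships and maximization over the fixed effects $\boldsymbol{\beta}_j$ and the variance component $\sigma^2$.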

Relevance: 40.00%

Abstract:

This paper presents a methodology that aims to increase the probability of delivering power to any load point of the electrical distribution system by identifying new investments in distribution components. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling for system component outage parameters. Fuzzy membership functions of system component outage parameters are obtained from statistical records. A mixed integer non-linear optimization technique is developed to identify adequate investments in distribution network components that increase the availability level for any customer in the distribution system at minimum cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a real distribution network.
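
As a minimal sketch of the kind of fuzzy modelling described here (and in the related papers below), a component outage parameter can be represented as a triangular fuzzy number fitted to statistical records; the triangular shape and all names below are illustrative assumptions, not taken from the paper.

```python
# Sketch: triangular fuzzy membership for a component failure rate,
# with support and mode taken from historical outage records.
import numpy as np

def triangular_membership(x, low, mode, high):
    """Degree of membership of x in the triangular fuzzy number (low, mode, high)."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - low) / (mode - low), 0.0, 1.0)
    falling = np.clip((high - x) / (high - mode), 0.0, 1.0)
    return np.minimum(rising, falling)

# Failure rates (failures/year) recorded for one feeder over several years:
records = np.array([0.42, 0.55, 0.48, 0.61, 0.50])
low, mode, high = records.min(), np.median(records), records.max()
print(triangular_membership([0.45, 0.50, 0.60], low, mode, high))
```

The fuzzy outage parameters then feed the availability terms of the investment optimization model.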

Relevance: 40.00%

Abstract:

The paper proposes a methodology to increase the probability of delivering power to any load point by identifying new investments in distribution energy systems. The proposed methodology is based on statistical failure and repair data of distribution components and uses fuzzy-probabilistic modeling for the component outage parameters. The fuzzy membership functions of the outage parameters of each component are based on statistical records. A mixed integer nonlinear programming optimization model is developed to identify adequate investments in distribution energy system components that increase the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.

Relevance: 40.00%

Abstract:

This paper proposes a methodology to increase the probability of delivering power to any load point through the identification of new investments. The methodology uses a fuzzy set approach to model the uncertainty of outage parameters, load and generation. A DC fuzzy multicriteria optimization model, considering the Pareto front and based on mixed integer non-linear programming, is developed to identify the adequate investments in distribution network components that increase the probability of delivering power to all customers in the distribution network at the minimum possible cost for the system operator, while minimizing the cost of non-supplied energy. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 33-bus distribution network.

Relevance: 40.00%

Abstract:

A methodology to increase the probability of delivering power to any load point through the identification of new investments in distribution network components is proposed in this paper. The method minimizes the investment cost as well as the cost of energy not supplied in the network. A DC optimization model based on mixed integer non-linear programming is developed, considering the Pareto front technique, to identify the adequate investments in distribution network components that increase the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator, while minimizing the cost of energy not supplied. Thus, a multi-objective problem is formulated. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
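
Abstractly, the bi-objective problem recurring in these papers can be sketched as follows; every symbol here is our own illustrative notation, not drawn from the papers. With binary investment decisions $y_k$, investment costs $CI_k$, and energy not supplied $\mathrm{ENS}(\mathbf{y})$ valued at a unit cost $c_{ENS}$,

```latex
\min_{\mathbf{y} \in \{0,1\}^K}
\left(
  \sum_{k=1}^{K} CI_k\, y_k,
  \;\;
  c_{ENS}\, \mathrm{ENS}(\mathbf{y})
\right)
\quad \text{subject to DC power-flow and reliability constraints,}
```

with the Pareto front tracing the trade-off between investment cost and reliability that the system operator must arbitrate.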

Relevance: 40.00%

Abstract:

The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), together with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, rather elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating or the combination of likelihood and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to the finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimating functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
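
To make the "Bayesian updating is addition" point concrete, one common way to write the structure (our gloss, following the Bayes-linear-space literature, not quoted from the abstract) is:

```latex
% Perturbation (the vector addition) of P-densities f and g:
(f \oplus g)(x) = \frac{f(x)\, g(x)}{\int f g \, dP},
% so Bayes' rule is literally posterior = prior (+) likelihood:
\pi(\theta \mid x) = \pi(\theta) \oplus L(\theta; x),
% and the inner product is a covariance of log-densities under P:
\langle f, g \rangle_{A^2(P)} = \operatorname{Cov}_P(\log f, \log g).
```

On the simplex with a uniform reference measure, this reduces to the familiar Aitchison perturbation and clr-based inner product of compositional data analysis.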

Relevance: 40.00%

Abstract:

The influence of hole-hole (h-h) propagation, in addition to the conventional particle-particle (p-p) propagation, on the energy per particle and the momentum distribution is investigated for the v2 central interaction, which is derived from Reid's soft-core potential. The results are compared to Brueckner-Hartree-Fock calculations with a continuous choice for the single-particle (SP) spectrum. Calculation of the energy from a self-consistently determined SP spectrum leads to a lower saturation density. This result is not corroborated by calculating the energy from the hole spectral function, which is, however, not self-consistent. A generalization of previous calculations of the momentum distribution, based on a Goldstone diagram expansion, is introduced that allows the inclusion of h-h contributions to all orders. From this result an alternative calculation of the kinetic energy is obtained. In addition, a direct calculation of the potential energy is presented which is obtained from a solution of the ladder equation containing p-p and h-h propagation to all orders. These results can be considered as the contributions of selected Goldstone diagrams (including p-p and h-h terms on the same footing) to the kinetic and potential energy, in which the SP energy is given by the quasiparticle energy. The results for the summation of Goldstone diagrams lead to a different momentum distribution than the one obtained from integrating the hole spectral function, which in general gives less depletion of the Fermi sea. Various arguments, based partly on the results that are obtained, are put forward that a self-consistent determination of the spectral functions, including the p-p and h-h ladder contributions (using a realistic interaction), will shed light on the question of nuclear saturation at a nonrelativistic level that is consistent with the observed depletion of SP orbitals in finite nuclei.
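
The standard link between the hole spectral function and the quantities discussed here (our notation) is obtained by integrating the spectral strength up to the Fermi energy:

```latex
% Momentum distribution from the hole spectral function S_h(k, omega):
n(k) = \int_{-\infty}^{\varepsilon_F} S_h(k, \omega)\, d\omega,
% with the kinetic energy per particle then following (up to
% normalization) by weighting n(k) with the free dispersion:
\frac{\langle T \rangle}{A}
  \propto \int_0^{\infty} dk\, k^2\, \frac{\hbar^2 k^2}{2m}\, n(k).
```

The point above is that $n(k)$ computed this way need not agree with the momentum distribution obtained from the Goldstone diagram summation unless the spectral functions are determined self-consistently.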

Relevance: 40.00%

Abstract:

The speed of traveling fronts for a two-dimensional model of a delayed reaction-dispersal process is derived analytically and from molecular dynamics simulations. We show that the one-dimensional (1D) and two-dimensional (2D) versions of a given kernel do not always yield the same speed. It is also shown that the speeds of time-delayed fronts may be higher than those predicted by the corresponding non-delayed models. This result is shown for systems with peaked dispersal kernels, which lead to ballistic transport.
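
As background on how a dispersal kernel enters a front speed, the classical linearly determined speed for a (non-delayed, discrete-time) kernel-based model reads as follows; this is a standard textbook result, not the paper's delayed 2D formula.

```latex
% phi: dispersal kernel with moment generating function M,
% R_0: net reproductive rate per generation of duration T:
c^{*} = \min_{s > 0} \frac{1}{s T} \ln\!\left[ R_0\, M(s) \right],
\qquad
M(s) = \int_{-\infty}^{\infty} \phi(x)\, e^{s x}\, dx.
```

In two dimensions the speed is governed by the 1D marginal of the 2D kernel, which is in general a different function, consistent with the observation above that the 1D and 2D versions of a kernel need not give the same speed.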

Relevance: 40.00%

Abstract:

Electricity distribution network operation (NO) models are challenged as they are expected to continue to undergo changes during the coming decades in the fairly developed and regulated Nordic electricity market. Network asset managers have to adapt to competitive techno-economic business models for the operation of increasingly intelligent distribution networks. Factors driving the changes towards new business models within network operation include: increased investments in distributed automation (DA), regulatory frameworks with annual profit limits and quality incentives through outage costs, increasing end-customer demands, climatic changes and the increasing use of data system tools, such as the Distribution Management System (DMS). The doctoral thesis addresses the questions of a) whether there exist conditions and qualifications for competitive markets within electricity distribution network operation and b) if so, what the limitations and required business mechanisms are. This doctoral thesis aims to provide an analytical business framework, primarily for electric utilities, for evaluating and developing dedicated network operation models to meet future market dynamics within network operation. In the thesis, the generic build-up of a business model is addressed through the strategic business hierarchy levels of mission, vision and strategy, which define the strategic direction of the business, followed by the planning, management and process execution levels of enterprise strategy execution. Research questions within electricity distribution network operation are addressed at the specified hierarchy levels. The results of the research represent interdisciplinary findings in the areas of electrical engineering and production economics. The main scientific contributions include the further development of extended transaction cost economics (TCE) for governance decisions within electricity networks and the validation of the usability of the methodology for the electricity distribution industry. Moreover, the DMS benefit evaluations in the thesis, based on outage cost calculations, propose theoretical maximum benefits of DMS applications equalling roughly 25% of the annual outage costs and 10% of the respective operative costs in the case electric utility. Hence, the annual measurable theoretical benefits from the use of DMS applications are considerable. The theoretical results in the thesis are generally validated by surveys and questionnaires.

Relevance: 40.00%

Abstract:

The main subject of this thesis is the distribution of prime numbers in arithmetic progressions, that is, of primes of the form $qn+a$, with $a$ and $q$ fixed integers and $n=1,2,3,\dots$ The thesis also concerns the comparison of different arithmetic sequences with respect to their behaviour in arithmetic progressions. It is divided into four chapters and contains three articles. The first chapter is an invitation to analytic number theory, followed by a review of the tools that will be used later. This introduction also includes some research results, which we thought fit to include throughout the text. The second chapter contains the article \emph{Inequities in the Shanks-Rényi prime number race: an asymptotic formula for the densities}, which is the fruit of joint research with Professor Greg Martin. The goal of this article is to study a phenomenon known as "Chebyshev's bias", which is observed in "prime number races". Chebyshev observed that there seem to be more primes of the form $4n+3$ than of the form $4n+1$. More generally, Rubinstein and Sarnak showed the existence of a quantity $\delta(q;a,b)$, which denotes the probability of having more primes of the form $qn+a$ than of the form $qn+b$. In this article we prove an asymptotic formula for $\delta(q;a,b)$ which can achieve an arbitrary order of precision (in terms of negative powers of $q$). We also present numerical results that support our formulas. The third chapter contains the article \emph{Residue classes containing an unexpected number of primes}. The goal is to fix an integer $a\neq 0$ and then study the distribution of primes of the form $qn+a$, on average over $q$. We show that the initially fixed integer $a$ has a strong influence on this distribution, and that there exist in fact certain arithmetic progressions containing fewer primes than others. This phenomenon is rather surprising, given the prime number theorem for arithmetic progressions, which states that the primes are equidistributed among the residue classes $\bmod q$. The fourth chapter contains the article \emph{The influence of the first term of an arithmetic progression}. In this article we are interested in irregularities similar to those observed in the third chapter, but for more general arithmetic sequences. Indeed, we study sequences such as the integers expressible as a sum of two squares, the values of a binary quadratic form, prime $k$-tuples and the integers without small prime factors. We show that in each of these examples, as well as in a large class of arithmetic sequences, there exist irregularities in the arithmetic progressions $a\bmod q$, with $a$ fixed and on average over $q$.
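
For reference, the Rubinstein-Sarnak quantity $\delta(q;a,b)$ mentioned above is a logarithmic density; in our formulation of the standard definition, with $\pi(t;q,a)$ denoting the number of primes $p \le t$ with $p \equiv a \pmod q$,

```latex
\delta(q;a,b) = \lim_{X \to \infty} \frac{1}{\log X}
\int_{\substack{t \in [2, X] \\ \pi(t;q,a) > \pi(t;q,b)}} \frac{dt}{t},
```

so $\delta(4;3,1) > 1/2$ is the quantitative form of Chebyshev's observation about primes of the form $4n+3$ versus $4n+1$.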