995 results for "Geometric vitality function"


Relevance: 100.00%

Abstract:

The present study concerns the characterization of probability distributions using the residual entropy function. The concept of entropy is extensively used in the literature as a quantitative measure of the uncertainty associated with a random phenomenon. The lifetime models commonly used in reliability theory are the exponential, Pareto, beta, Weibull and gamma distributions. Several characterization theorems are obtained for these models using reliability concepts such as the failure rate, the mean residual life function, the vitality function and the variance residual life function. Most of the work on characterizing distributions in the reliability context centres on the failure rate or the residual life function. An important question in the study of entropy is that of locating distributions for which Shannon's entropy is maximal subject to certain restrictions on the underlying random variable. The study introduces the geometric vitality function and examines its properties; it is established that the geometric vitality function determines the distribution uniquely. The problem of averaging the residual entropy function is examined, and truncated versions of higher-order entropies are defined. It is also established that the residual entropy function determines the distribution uniquely and that its constancy characterizes the geometric distribution.
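For reference, the two measures central to this abstract are commonly defined as follows, for a non-negative random variable X with density f and survival function \bar F (these are the standard forms from the literature, not necessarily the exact notation of the study):

$$ H(X;t) = -\int_t^{\infty} \frac{f(x)}{\bar F(t)} \log \frac{f(x)}{\bar F(t)}\, dx, \qquad \log G(t) = E[\log X \mid X > t] = \frac{1}{\bar F(t)} \int_t^{\infty} (\log x)\, f(x)\, dx, $$

where H(X;t) is the residual entropy at time t and G(t) is the geometric vitality function, the geometric mean of lifetimes beyond t.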

Relevance: 100.00%

Abstract:

Traditional approaches to calculating total factor productivity change through Malmquist indexes rely on distance functions. In this paper we show that the use of distance functions to calculate total factor productivity change may introduce bias into the analysis, and we therefore propose a procedure that calculates total factor productivity change from observed values only. The resulting total factor productivity change is then decomposed into efficiency change, technological change, and a residual effect. The decomposition uses a non-oriented measure in order to avoid the problems associated with the traditional use of radially oriented measures, especially when variable-returns-to-scale technologies are compared.
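For context, the conventional distance-function-based Malmquist index that this paper argues against is usually written as

$$ M(x^t, y^t, x^{t+1}, y^{t+1}) = \left[ \frac{D^t(x^{t+1}, y^{t+1})}{D^t(x^t, y^t)} \cdot \frac{D^{t+1}(x^{t+1}, y^{t+1})}{D^{t+1}(x^t, y^t)} \right]^{1/2}, $$

where D^t denotes the distance function relative to the period-t technology; the proposed procedure avoids these distance functions altogether.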

Relevance: 100.00%

Abstract:

Traditional approaches to calculating total factor productivity (TFP) change through Malmquist indexes rely on distance functions. In this paper we show that the use of distance functions to calculate TFP change may introduce bias into the analysis, and we therefore propose a procedure that calculates TFP change from observed values only. The resulting TFP change is then decomposed into efficiency change, technological change, and a residual effect. The decomposition uses a non-oriented measure in order to avoid the problems associated with the traditional use of radially oriented measures, especially when variable-returns-to-scale technologies are compared. The proposed approach is applied to a sample of Portuguese bank branches.
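A minimal sketch of what an "observed values only" TFP change could look like for one unit between two periods. The geometric-mean aggregation of outputs and inputs is an assumption made here for illustration, not the paper's exact procedure:

```python
import numpy as np

def tfp_level(outputs, inputs):
    """TFP level as geometric mean of outputs over geometric mean of inputs.

    Hypothetical aggregation for illustration only; the paper's exact
    'observed values' measure and its decomposition may differ.
    """
    return np.exp(np.mean(np.log(outputs))) / np.exp(np.mean(np.log(inputs)))

# TFP change between two periods for one bank branch (made-up data).
tfp_change = tfp_level([120.0, 80.0], [30.0]) / tfp_level([100.0, 75.0], [32.0])
print(tfp_change)
```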

Relevance: 100.00%

Abstract:

The present study focuses on defining certain measures of income inequality for truncated distributions, on the characterization of probability distributions using the functional forms of these measures, on the extension of some measures of inequality and stability to higher dimensions, on the characterization of bivariate models using the above concepts, and on the estimation of some measures of inequality using Bayesian techniques. The thesis defines certain measures of income inequality for truncated distributions and studies the effect of truncation on these measures. An important measure used in reliability theory to assess the stability of a component is the residual entropy function; this concept can be used to advantage as a measure of inequality of truncated distributions. The geometric mean is a handy tool in the measurement of income inequality, and the geometric vitality function, being the geometric mean of the truncated random variable, can likewise be used to measure the inequality of truncated distributions. The study also addresses the estimation of the Lorenz curve, the Gini index and the variance of logarithms for the Pareto distribution using Bayesian techniques.
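As classical background for the Pareto case mentioned at the end (these closed forms are standard results, not the thesis' Bayesian estimators): for a Pareto Type I distribution with shape α > 1 the Lorenz curve is L(p) = 1 − (1 − p)^{1−1/α} and the Gini index is G = 1/(2α − 1).

```python
import numpy as np

def pareto_lorenz(p, alpha):
    """Lorenz curve L(p) of a Pareto Type I distribution, shape alpha > 1."""
    return 1.0 - (1.0 - np.asarray(p)) ** (1.0 - 1.0 / alpha)

def pareto_gini(alpha):
    """Gini index of a Pareto Type I distribution, shape alpha > 1."""
    return 1.0 / (2.0 * alpha - 1.0)

print(pareto_lorenz([0.25, 0.5, 0.75], alpha=2.0))
print(pareto_gini(2.0))  # 1/3
```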

Relevance: 100.00%

Abstract:

In this article, we study reliability measures such as the geometric vitality function and the conditional Shannon measures of uncertainty, proposed by Ebrahimi (1996) and by Sankaran and Gupta (1999), respectively, for doubly (interval) truncated random variables. In survival analysis and reliability engineering, these measures play a significant role in studying the various characteristics of a system or component when it fails between two time points. The interrelationships among these uncertainty measures are derived for various distributions, and characterization theorems arising out of them are proved.
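One common formulation of these interval-truncated measures (stated under the usual assumptions; the article's notation may differ) is

$$ H(X; t_1, t_2) = -\int_{t_1}^{t_2} \frac{f(x)}{F(t_2) - F(t_1)} \log \frac{f(x)}{F(t_2) - F(t_1)}\, dx, \qquad \log G(t_1, t_2) = E[\log X \mid t_1 < X < t_2], $$

i.e. the Shannon entropy and the geometric vitality function of the lifetime conditioned on failure in the interval (t_1, t_2).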

Relevance: 80.00%

Abstract:

Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. As pointed out earlier, the major part of the theory and applications of reliability analysis has been developed in terms of measures based on the distribution function. In the opening chapters of the thesis, we describe some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up to the systematic study of Nair and Sankaran (2009), the present work extends their ideas to develop the necessary theoretical framework for lifetime data analysis. Chapter 1 gives the relevance and scope of the study and a brief outline of the work carried out. Chapter 2 is devoted to the presentation of various concepts and brief reviews of them, which are useful for the discussions in the subsequent chapters. The introduction of Chapter 4 points out the role of ageing concepts in reliability analysis and in identifying life distributions. Chapter 6 studies the first two L-moments of residual life and their relevance in various applications of reliability analysis; we show that the first L-moment of the residual life is equivalent to the vitality function, which has been widely discussed in the literature. Chapter 7 defines the percentile residual life in reversed time (RPRL), derives its relationship with the reversed hazard rate (RHR), discusses the characterization problem of the RPRL, and demonstrates with an example that a given RPRL does not determine the distribution uniquely.
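For reference, the quantile-based objects underlying this framework (standard definitions in the quantile approach of Nair and Sankaran (2009)) are

$$ Q(u) = F^{-1}(u) = \inf\{x : F(x) \ge u\}, \qquad q(u) = Q'(u), \qquad H(u) = \frac{f(Q(u))}{1-u} = \frac{1}{(1-u)\, q(u)}, \quad 0 < u < 1, $$

where Q is the quantile function, q the quantile density, and H(u) the hazard quantile function, the quantile-domain analogue of the failure rate.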

Relevance: 80.00%

Abstract:

A family of measurements of generalisation is proposed for estimators of continuous distributions. In particular, they apply to neural network learning rules associated with continuous neural networks. The optimal estimators (learning rules) in this sense are Bayesian decision methods with information divergence as the loss function. The Bayesian framework guarantees the internal coherence of such measurements, while the information-geometric loss function guarantees invariance. The theoretical solution for the optimal estimator is derived by a variational method. It is applied to the family of Gaussian distributions and the implications are discussed. This is one in a series of technical reports on this topic; it generalises the results of [Zhu95:prob.discrete] to continuous distributions and serves as a concrete example of the larger picture in [Zhu95:generalisation].
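The information divergence (Kullback-Leibler divergence) used here as the loss function has a well-known closed form on the Gaussian family the report works with; a minimal sketch of that standard formula (not the report's variational derivation):

```python
import math

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    """KL divergence KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * sigma2 ** 2)
            - 0.5)

print(kl_gaussian(0.0, 1.0, 1.0, 2.0))  # divergence from N(0,1) to N(1,4)
```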

Relevance: 80.00%

Abstract:

In for-profit organizations, efficiency measurement with reference to the potential for profit augmentation is particularly important, as is its decomposition into technical and allocative components. Different profit efficiency approaches can be found in the literature for measuring and decomposing overall profit efficiency. In this paper, we highlight some problems within existing approaches and propose a new measure of profit efficiency based on a geometric mean of the input/output adjustments needed to maximize profits. Overall profit efficiency is calculated through this efficiency measure and decomposed into its technical and allocative components. Technical efficiency is calculated based on a non-oriented geometric distance function (GDF) that is able to incorporate all the sources of inefficiency, while allocative efficiency is retrieved residually. We also define a measure of profitability efficiency which complements profit efficiency in that it makes it possible to retrieve the scale efficiency of a unit as a component of its profitability efficiency. In addition, the measure of profitability efficiency allows for a dual profitability interpretation of the GDF measure of technical efficiency. The concepts introduced in the paper are illustrated using a numerical example.
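One way to write a geometric-mean efficiency measure of this kind, for a unit with m inputs contracted by factors θ_i and s outputs expanded by factors β_r (an illustrative form only; the paper's exact definition of the GDF may differ), is

$$ \mathrm{GDF} = \frac{\left(\prod_{i=1}^{m} \theta_i\right)^{1/m}}{\left(\prod_{r=1}^{s} \beta_r\right)^{1/s}}, $$

with GDF = 1 for units where no further geometric-mean adjustment toward the frontier is available.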

Relevance: 40.00%

Abstract:

In this paper we describe and evaluate a geometric, mass-preserving redistancing procedure for the level set function on general structured grids. The proposed algorithm is adapted from a recent finite element-based method and preserves mass by means of a localized mass correction. A salient feature of the scheme is the absence of adjustable parameters. The algorithm is tested in two and three spatial dimensions and compared with the widely used partial differential equation (PDE)-based redistancing method on structured Cartesian grids. Through the use of quantitative error measures of interest in level set methods, we show that the overall performance of the proposed geometric procedure is better than that of PDE-based reinitialization schemes, since it is more robust with comparable accuracy. We also show that the algorithm is well suited to the highly stretched curvilinear grids used in CFD simulations.
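To illustrate what redistancing does, here is a simplified geometric sketch on a uniform Cartesian grid, based on Euclidean distance transforms. It carries none of the paper's mass correction or curvilinear-grid support, and it assumes the level set changes sign somewhere on the grid:

```python
import numpy as np
from scipy import ndimage

def redistance(phi, dx=1.0):
    """Rebuild an approximate signed-distance function from the sign of phi.

    Simplified geometric reinitialization: distance transforms of the
    inside/outside regions, with no mass correction (unlike the paper).
    """
    inside = phi < 0
    d_out = ndimage.distance_transform_edt(~inside) * dx  # distance for outside cells
    d_in = ndimage.distance_transform_edt(inside) * dx    # distance for inside cells
    return d_out - d_in  # positive outside, negative inside

# Example: redistance a distorted circle level set (not a distance function).
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
phi_sd = redistance((x**2 + y**2 - 0.25) * 3.0, dx=2.0 / 63)
```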

Relevance: 40.00%

Abstract:

This paper describes an investigation into the use of parametric 2D models describing the movement of edges for the determination of the possible 3D shape, and hence function, of an object. An assumption of this research is that the camera can foveate on and track particular features. It is argued that simple 2D analytic descriptions of the movement of edges can be used to infer 3D shape while the camera is moved; this exploits an advantage of foveation, namely that the problem becomes object-centred. The problem of correspondence for numerous edge points is overcome by the use of a tree-based representation of the competing hypotheses. Numerous hypotheses are maintained simultaneously, and the method does not rely on a single kinematic model that assumes constant velocity or acceleration. The numerous advantages of this strategy are described.

Relevance: 30.00%

Abstract:

The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is the result of multiple distinct crash-generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. These multiple sources are, however, ignored in current modelling methodologies when trying either to explain or to predict crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash-generating process exists. This model mis-specification may lead to correlating crashes with the incorrect sources of contributing factors (e.g. concluding that a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study proposes a latent class model consistent with a multiple-crash-process theory and investigates the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated under the assumption that crashes arise from two distinct risk-generating processes, one involving engineering factors and one involving unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayes Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures illustrates the significantly improved performance of the proposed model compared to the EB-NB model, and the detection of blackspots was also improved. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
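A generic two-component latent class count model of the kind described (illustrative notation, not the paper's exact specification) takes the form

$$ P(y_i) = \sum_{k=1}^{2} \pi_k\, \mathrm{Poisson}\!\left(y_i \mid \lambda_{ik}\right), \qquad \log \lambda_{ik} = \mathbf{x}_i^{\top} \boldsymbol{\beta}_k, \qquad \pi_1 + \pi_2 = 1, $$

where the two components would correspond to the engineering-related and unobserved-spatial risk processes, and priors on the π_k encode beliefs about each process's contribution to the total crash count.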

Relevance: 30.00%

Abstract:

The natural frequencies of continuous systems depend on the governing partial differential equation and can be estimated numerically using the finite element method. The accuracy and convergence of the finite element method depend on the choice of basis functions, and a basis function generally performs better if it is closely linked to the problem physics. Since the stiffness matrix is the same for static and dynamic loading, the basis function can be chosen so that it satisfies the static part of the governing differential equation. In the case of a rotating beam, however, no exact closed-form solution of the static part of the governing differential equation is known. In this paper, we seek an approximate solution of the static part of the governing differential equation for a uniform rotating beam. The error resulting from the approximation is minimized to generate relations between the constants assumed in the solution. This new function is used as a basis function, giving rise to shape functions that depend on the position of the element in the beam, the material and geometric properties, and the rotational speed of the beam. The results of finite element analysis with the new basis functions are verified against the published literature for uniform and tapered rotating beams under different boundary conditions. Numerical results clearly show the advantage of the current approach at high rotation speeds, with a reduction of 10 to 33% in the degrees of freedom required for convergence of the first five modes to four decimal places for a uniform rotating cantilever beam.
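For reference, the static part of the governing equation for the transverse deflection w(x) of a rotating Euler-Bernoulli beam is standardly written (the symbols here are the usual ones, not necessarily the paper's) as

$$ \frac{d^2}{dx^2}\!\left( EI \frac{d^2 w}{dx^2} \right) - \frac{d}{dx}\!\left( T(x) \frac{dw}{dx} \right) = 0, \qquad T(x) = \int_x^L \rho A\, \Omega^2 \xi\, d\xi, $$

where T(x) is the centrifugal tension at station x, ρA the mass per unit length, Ω the rotation speed and L the beam length; it is this equation for which no exact closed-form solution is known.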

Relevance: 30.00%

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology, a key aspect of which is self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes and that some nodes are designated as anchors with known locations. First, we obtain high-probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms in the literature. Belief-propagation-based message passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message passing for hop-count-based self-localization.
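To make the hop-distance-to-Euclidean-distance idea concrete, here is a generic hop-count-based localization sketch in the DV-hop style: hop counts to anchors are scaled by an average hop length to get distance estimates, and the position is recovered by linearized least squares. This illustrates the general approach only, not the algorithms proposed in the paper:

```python
import numpy as np

def hop_count_localize(anchors, hop_counts, avg_hop_len):
    """Estimate a node's 2D position from hop counts to anchors.

    anchors: (k, 2) array of known anchor coordinates (k >= 3).
    hop_counts: hops from the node to each anchor.
    avg_hop_len: assumed Euclidean length of one hop.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = avg_hop_len * np.asarray(hop_counts, dtype=float)
    # Linearize by subtracting the last anchor's circle equation:
    # 2 (a_i - a_k) . p = d_k^2 - d_i^2 + |a_i|^2 - |a_k|^2.
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
print(hop_count_localize(anchors, hop_counts=[5, 6, 6], avg_hop_len=1.2))
```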