923 results for radial distribution function


Relevance: 80.00%

Abstract:

This thesis is devoted to the study of some stochastic models of inventories. An inventory system is a facility at which items of material are stocked. In order to promote the smooth and efficient running of a business, and to provide adequate service to customers, an inventory of materials is essential for any enterprise. When uncertainty is present, inventories are used as protection against the risk of stockout. It is advantageous to procure an item before it is needed, at a lower marginal cost, and bulk purchasing allows the advantage of price discounts to be availed. All of these contribute to the formation of inventory. Maintaining inventories is a major expenditure for any organization, and for each inventory the fundamental questions are how much new stock should be ordered and when the orders should be placed. The present study considers several models for single- and two-commodity stochastic inventory problems. The thesis discusses two models. In the first model, we examine the case in which the times elapsed between two consecutive demand points are independent and identically distributed with common distribution function F(.) with finite mean, and in which the demand magnitude depends only on the time elapsed since the previous demand epoch; the time between disasters has an exponential distribution with parameter λ. In Model II, the interarrival times of disasters have a general distribution F(.) with finite mean, and the quantity destroyed depends on the time elapsed between disasters, while demands form a compound Poisson process with interarrival times of mean 1/λ. The thesis also deals with a linearly correlated bulk-demand two-commodity inventory problem, where each arrival demands a random number of items of each commodity C1 and C2, the maximum quantities demanded being a (< S1) and b (< S2).
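
As a loose illustration of the kind of dynamics involved, the sketch below simulates a toy (s, S) inventory under exponential inter-demand times, demand sizes that grow with the elapsed time, and exponentially distributed disasters. All parameters and the replenishment rule are invented for the sketch; this is not the thesis's model.

    # Toy (s, S) inventory with time-dependent demand sizes and random disasters.
    import random

    def simulate(s=5, S=20, lam_demand=1.0, lam_disaster=0.1,
                 horizon=1000.0, seed=42):
        random.seed(seed)
        t, stock, stockouts = 0.0, S, 0
        while t < horizon:
            gap = random.expovariate(lam_demand)        # time to next demand
            disaster = random.expovariate(lam_disaster) # time to next disaster
            if disaster < gap:                          # disaster arrives first
                t += disaster
                stock -= min(stock, int(disaster * 2))  # destruction grows with time
            else:
                t += gap
                demand = max(1, int(gap * 2))           # size depends on elapsed time
                if demand > stock:
                    stockouts += 1
                stock = max(0, stock - demand)
            if stock <= s:                              # (s, S) replenishment rule
                stock = S
        return stockouts

    print("stockouts:", simulate())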

Relevance: 80.00%

Abstract:

The average availability of a repairable system is the expected proportion of time that the system is operating in the interval [0, t]. The present article discusses the nonparametric estimation of the average availability when (i) data on 'n' complete cycles of system operation are available, (ii) the data are subject to right censorship, and (iii) the process is observed up to a specified time 'T'. In each case, a nonparametric confidence interval for the average availability is also constructed. Simulations are conducted to assess the performance of the estimators.
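
For case (i), a natural nonparametric point estimate is the total observed uptime divided by the total cycle time, and resampling whole cycles gives one possible confidence interval. The cycle data below are invented, and the bootstrap interval is a generic stand-in for the article's construction.

    # Nonparametric average-availability estimate from n complete cycles,
    # with a percentile bootstrap confidence interval.
    import random

    up =   [12.1, 9.8, 15.3, 11.0, 13.7, 10.4, 14.2, 12.9]  # operating times
    down = [ 1.2, 0.9,  2.1,  1.5,  1.1,  0.8,  1.9,  1.4]  # repair times

    def availability(u, d):
        return sum(u) / (sum(u) + sum(d))

    print("point estimate:", round(availability(up, down), 4))

    random.seed(0)
    n, reps, boot = len(up), 2000, []
    for _ in range(reps):
        idx = [random.randrange(n) for _ in range(n)]       # resample whole cycles
        boot.append(availability([up[i] for i in idx],
                                 [down[i] for i in idx]))
    boot.sort()
    print("95% bootstrap CI:", (round(boot[int(0.025 * reps)], 4),
                                round(boot[int(0.975 * reps)], 4)))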

Relevance: 80.00%

Abstract:

Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. A major part of the theory and applications of reliability analysis has been developed in terms of measures based on the distribution function. In the opening chapters of the thesis, we describe some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up to the systematic study by Nair and Sankaran (2009), the present work extends their ideas to develop the necessary theoretical framework for lifetime data analysis. Chapter 1 gives the relevance and scope of the study and a brief outline of the work carried out. Chapter 2 presents various concepts and brief reviews of them, which are useful for the discussions in the subsequent chapters. The introduction of Chapter 4 points out the role of ageing concepts in reliability analysis and in identifying life distributions. Chapter 6 studies the first two L-moments of residual life and their relevance in various applications of reliability analysis; we show that the first L-moment of the residual function is equivalent to the vitality function, which has been widely discussed in the literature. Chapter 7 defines the percentile residual life in reversed time (RPRL) and derives its relationship with the reversed hazard rate (RHR). We discuss the characterization problem for the RPRL and demonstrate with an example that the RPRL at a fixed order does not determine the distribution uniquely.
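
As a small numerical taste of the quantile-based framework, the sketch below computes the hazard quantile function H(u) = 1/((1 - u) q(u)), where q(u) = Q'(u) is the quantile density, for the exponential quantile function Q(u) = -log(1 - u)/λ; constancy of H(u) at λ is the expected sanity check. The choice of model is ours, not the thesis's.

    # Hazard quantile function via a numerical quantile density.
    import math

    lam = 0.5
    Q = lambda u: -math.log(1.0 - u) / lam      # exponential quantile function

    def hazard_quantile(Q, u, h=1e-6):
        q = (Q(u + h) - Q(u - h)) / (2 * h)     # numerical quantile density q(u)
        return 1.0 / ((1.0 - u) * q)

    for u in (0.1, 0.5, 0.9):
        print(u, round(hazard_quantile(Q, u), 4))   # ~0.5 everywhere, as expected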

Relevance: 80.00%

Abstract:

Electron transport in a self-consistent potential along a ballistic two-terminal conductor has been investigated. We have derived general formulas which describe the nonlinear current-voltage characteristics, differential conductance, and low-frequency current and voltage noise assuming an arbitrary distribution function and correlation properties of injected electrons. The analytical results have been obtained for a wide range of biases: from equilibrium to high values beyond the linear-response regime. The particular case of a three-dimensional Fermi-Dirac injection has been analyzed. We show that the Coulomb correlations are manifested in the negative excess voltage noise, i.e., the voltage fluctuations under high-field transport conditions can be less than in equilibrium.
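
The derivation itself is analytical, but the injected-carrier statistics it assumes are easy to state in code: the sketch below merely evaluates the Fermi-Dirac occupation f(E) = 1/(exp((E - μ)/kT) + 1) at a few energies, with illustrative values for μ and kT.

    # Fermi-Dirac occupation probability, the injection statistics assumed above.
    import math

    def fermi_dirac(E, mu, kT):
        """Occupation of a state at energy E (same units as mu and kT)."""
        return 1.0 / (math.exp((E - mu) / kT) + 1.0)

    for E in (0.0, 0.1, 0.2):                   # energies in eV, values illustrative
        print(E, round(fermi_dirac(E, mu=0.1, kT=0.025), 4))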

Relevance: 80.00%

Abstract:

We analyze how the spatial localization properties of pairing correlations change across a major neutron shell of heavy nuclei. It is shown that the radial distribution of the pairing density depends strongly on whether the chemical potential is close to a low or a high angular momentum level, and has little sensitivity to whether the pairing force acts at the surface or in the bulk. The pairing density averaged over one major shell is, however, rather flat, exhibiting little dependence on the pairing force. Hartree-Fock-Bogoliubov calculations for the isotopic chain 100-132Sn are presented for demonstration purposes.

Relevance: 80.00%

Abstract:

In this paper, we propose a handwritten character recognition system for the Malayalam language. The feature extraction phase consists of gradient and curvature calculation and dimensionality reduction using Principal Component Analysis. Directional information from the arc tangent of the gradient is used as the gradient feature, and the strength of the gradient in the curvature direction is used as the curvature feature. The proposed system uses a combination of the gradient and curvature features in reduced dimension as the feature vector. For classification, the discriminative power of the Support Vector Machine (SVM) is evaluated. The results reveal that an SVM with a Radial Basis Function (RBF) kernel yields the best performance, with 96.28% and 97.96% accuracy on two different datasets. This is the highest accuracy ever reported on these datasets.
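
A minimal sketch of the pipeline's shape (PCA for dimensionality reduction followed by an RBF-kernel SVM), run on scikit-learn's bundled digits data as a stand-in for the Malayalam datasets; the component count and split are arbitrary, and the gradient/curvature feature extraction is not reproduced.

    # PCA + RBF-kernel SVM classification pipeline on stand-in data.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = make_pipeline(PCA(n_components=30), SVC(kernel="rbf", gamma="scale"))
    clf.fit(Xtr, ytr)
    print("accuracy:", round(clf.score(Xte, yte), 4))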

Relevance: 80.00%

Abstract:

This paper presents a Reinforcement Learning (RL) approach to economic dispatch (ED) using a Radial Basis Function neural network. We formulate the ED as an N-stage decision-making problem, propose a novel architecture to store Q-values, and present a learning algorithm to learn the weights of the neural network. Although many stochastic search techniques such as simulated annealing, genetic algorithms, and evolutionary programming have been applied to ED, they require searching for the optimal solution anew for each load demand, and they are limited in handling stochastic cost functions. In our approach, once the Q-values are learned, we can find the dispatch for any load demand. We recently proposed an RL approach to ED in which the optimum dispatch could be found only for a set of specified discrete values of power demand. The performance of the proposed algorithm is validated on the IEEE 6-bus system, considering transmission losses.
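
The sketch below is a heavily simplified, single-stage version of the idea: Q-values over (demand, allocation) pairs are stored as weights on Gaussian RBF features, trained on random demands, and then queried for an unseen demand. The two-generator cost curves, network layout, and learning rate are all invented; the paper's architecture is more involved.

    # Toy Q-value storage with Gaussian RBF features for a two-generator dispatch.
    import numpy as np

    rng = np.random.default_rng(0)
    cost = lambda p1, p2: 0.01*p1**2 + 2*p1 + 0.02*p2**2 + 1.5*p2  # invented costs

    centers = np.array([(d, a) for d in np.linspace(50, 150, 6)
                                for a in np.linspace(0, 150, 8)])
    width = 25.0

    def phi(d, a):                         # Gaussian RBF feature vector
        dist2 = ((centers - np.array([d, a]))**2).sum(axis=1)
        return np.exp(-dist2 / (2 * width**2))

    w = np.zeros(len(centers))
    for _ in range(20000):                 # fit Q(d, a) ~ one-stage dispatch cost
        d = rng.uniform(50, 150)
        a = rng.uniform(0, d)              # generator 1 output; generator 2 covers rest
        f = phi(d, a)
        w += 0.05 * (cost(a, d - a) - w @ f) * f / (f @ f + 1e-9)

    d = 100.0                              # unseen demand: pick allocation minimising Q
    grid = np.linspace(0, d, 101)
    best = min(grid, key=lambda a: w @ phi(d, a))
    print("dispatch:", round(best, 1), round(d - best, 1))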

Relevance: 80.00%

Abstract:

Results of relativistic (Dirac-Slater and Dirac-Fock) and nonrelativistic (Hartree-Fock-Slater) atomic and molecular calculations have been compared for the group 5 elements Nb, Ta, and Ha and their compounds MCl_5, to elucidate the influence of relativistic effects on their properties, especially in going from the 5d element Ta to the 6d element Ha. The analysis of the radial distribution of the valence electrons of the metals, for the electronic configurations obtained from the molecular calculations, and of their overlap with the ligands shows opposite trends in the behavior of the ns_1/2, np_1/2, and (n-1)d_5/2 orbitals of Ta and Ha in the relativistic and nonrelativistic cases. Relativistic contraction and energetic stabilization of the ns_1/2 and np_1/2 wave functions, together with expansion and destabilization of the (n-1)d_5/2 orbitals, make hahnium pentahalide more covalent than tantalum pentahalide and increase the bond strength. The nonrelativistic treatment of the wave functions results in an increase in ionicity of the MCl_5 molecules in going from Nb to Ha, making element Ha an analog of V. Different trends for the relativistic and nonrelativistic cases are also found for ionization potentials, electron affinities, and energies of charge-transfer transitions, as well as for the stability of the maximum oxidation state.

Relevance: 80.00%

Abstract:

This thesis attempts to quantify the amount of information needed to learn certain tasks. The tasks chosen vary from learning functions in a Sobolev space using radial basis function networks to learning grammars in the principles and parameters framework of modern linguistic theory. These problems are analyzed from the perspective of computational learning theory and certain unifying perspectives emerge.
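
For the function-learning end of the spectrum, the sketch below fits a Gaussian RBF network to noisy samples of a smooth target by linear least squares and reports test error as the sample size grows; the target, widths, and centre placement are arbitrary choices, not the thesis's setup.

    # RBF-network least-squares fit of a smooth target; error vs. sample size.
    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda x: np.sin(2 * np.pi * x)            # smooth target function

    def rbf_fit_error(n_samples, n_centers, width=0.1):
        x = rng.uniform(0, 1, n_samples)
        y = f(x) + rng.normal(0, 0.05, n_samples)  # noisy examples
        c = np.linspace(0, 1, n_centers)
        Phi = np.exp(-(x[:, None] - c[None, :])**2 / (2 * width**2))
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        xt = np.linspace(0, 1, 500)                # test grid
        Pt = np.exp(-(xt[:, None] - c[None, :])**2 / (2 * width**2))
        return np.sqrt(np.mean((Pt @ w - f(xt))**2))

    for n in (20, 80, 320):
        print(n, round(rbf_fit_error(n, n_centers=10), 4))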

Relevance: 80.00%

Abstract:

The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function, and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory and cannot be solved using standard nonlinear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of the stopping criteria, which we also establish for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, which uses a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained applying SVMs to the problem of detecting frontal human faces in real images.
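
A bare-bones sketch of the decomposition idea: freeze all dual variables except a small working set and solve the restricted QP, so only small kernel blocks are ever formed. Working-set selection here is naive (random) and the subproblem solver is SciPy's SLSQP, unlike the KKT-based selection, stopping criteria, and second-order Reduced Gradient solver of the paper.

    # Decomposition over random working sets for the SVM dual on toy data.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, C, gamma = 60, 1.0, 0.5
    X = rng.normal(size=(n, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    K = np.exp(-gamma * ((X[:, None] - X[None, :])**2).sum(-1))   # RBF kernel
    Q = (y[:, None] * y[None, :]) * K

    alpha = np.zeros(n)
    for sweep in range(30):
        B = rng.choice(n, size=8, replace=False)                  # working set
        N = np.setdiff1d(np.arange(n), B)
        lin = 1.0 - Q[np.ix_(B, N)] @ alpha[N]                    # fixed-part term
        obj = lambda a: 0.5 * a @ Q[np.ix_(B, B)] @ a - lin @ a   # -dual, restricted
        cons = {"type": "eq", "fun": lambda a: y[B] @ a + y[N] @ alpha[N]}
        res = minimize(obj, alpha[B], method="SLSQP",
                       bounds=[(0.0, C)] * len(B), constraints=[cons])
        alpha[B] = res.x

    print("support vectors:", int((alpha > 1e-6).sum()), "of", n)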

Relevance: 80.00%

Abstract:

Observations in daily practice are sometimes registered as positive values larger than a given threshold α. The sample space is in this case the interval (α, +∞), α > 0, which can be structured as a real Euclidean space in different ways. This fact opens the door to alternative statistical models that depend not only on the assumed distribution function, but also on the metric considered appropriate, i.e., the way differences, and thus variability, are measured.
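
One concrete instance of the point: the same sample on (α, +∞) has different natural centres depending on whether the interval carries the usual Euclidean metric or the one induced by the bijection u = log(x - α). The sample and threshold below are invented.

    # Two "means" of the same data under two Euclidean structures on (alpha, inf).
    import math

    alpha = 1.0
    x = [1.2, 1.5, 2.3, 4.0, 9.5]

    arithmetic = sum(x) / len(x)                          # usual metric on the reals
    logs = [math.log(v - alpha) for v in x]
    geometric = alpha + math.exp(sum(logs) / len(logs))   # metric via u = log(x - alpha)

    print(round(arithmetic, 3), round(geometric, 3))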

Relevance: 80.00%

Abstract:

This paper focuses on one of the methods for bandwidth allocation in an ATM network: the convolution approach. The convolution approach permits an accurate study of the system load in statistical terms by accumulated calculations, since probabilistic results of the bandwidth allocation can be obtained. Nevertheless, the convolution approach has a high cost in terms of calculation and storage requirements, which makes real-time calculation difficult, so many authors do not consider this approach. With the aim of reducing the cost, we propose to use the multinomial distribution function: the enhanced convolution approach (ECA). This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements and makes a simple deconvolution process possible. The ECA is used in connection acceptance control, and some results are presented.
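
A sketch of the plain convolution step for identical ON/OFF sources: each source demands b units with probability p, and convolving the per-source distributions yields the distribution of instantaneous aggregate demand, from which an overflow probability can be read off. All parameters are invented.

    # Aggregate-demand distribution by repeated convolution of per-source pmfs.
    import numpy as np

    n_sources, b, p, capacity = 20, 2, 0.3, 16

    source = np.zeros(b + 1)
    source[0], source[b] = 1 - p, p            # one source: OFF or ON at b units
    agg = np.array([1.0])
    for _ in range(n_sources):                 # accumulate by convolution
        agg = np.convolve(agg, source)

    print("P(demand > capacity) =", agg[capacity + 1:].sum())

For identical sources the same distribution is available directly from the binomial (the two-outcome multinomial), which is precisely the shortcut the ECA exploits.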

Relevance: 80.00%

Abstract:

This paper reviews, from a theoretical standpoint, the Poisson probability distribution as the function that assigns to each defined event, over a discrete random variable, its probability of occurrence in a time interval or disjoint region of space. It also reviews the negative exponential distribution, used to model the time interval between consecutive Poisson events that occur independently; that is, events for which the probability of occurrence in one time interval does not depend on those occurring in other time intervals, which is why the distribution is said to be memoryless. The Poisson process relates the Poisson function, which represents a set of independent events occurring in a time interval or region of space, to the times between occurrences of the events according to the negative exponential distribution. These concepts are used in queueing theory, the branch of operations research that describes and provides solutions to situations in which a set of individuals or items form queues while waiting to be served; application examples from the medical field are presented.
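
The link between the two distributions is easy to check by simulation: drawing exponential gaps with rate λ and counting events per unit window should give counts whose mean and variance are both close to λ, as the Poisson distribution predicts. The rate and horizon below are arbitrary.

    # Exponential inter-arrival times -> Poisson counts per unit window.
    import random

    random.seed(3)
    lam, horizon = 4.0, 10000.0
    t, counts = 0.0, []
    window_end, c = 1.0, 0
    while t < horizon:
        t += random.expovariate(lam)      # memoryless gaps between events
        while t >= window_end:            # close finished unit windows
            counts.append(c)
            c = 0
            window_end += 1.0
        c += 1

    mean = sum(counts) / len(counts)
    var = sum((k - mean)**2 for k in counts) / len(counts)
    print(round(mean, 2), round(var, 2))  # both ~ lam, as Poisson predicts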

Relevance: 80.00%

Abstract:

The service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using the PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for CAC in ATM networks with small buffers: if the source characteristics are known, the actual CLR can be estimated very well, and this estimate is always conservative, allowing the network performance guarantees to be retained. Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments confining the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to absorb cell contention adequately. Note that propagation delay is a limit that cannot be ignored for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation, giving sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new method of evaluation is analysed: the Enhanced Convolution Approach (ECA). In the ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj), and an expression for evaluating CLRj is presented. We conclude that, by combining the ECA method with cut-off mechanisms, utilisation of the ECA in real-time CAC environments as a single-level scheme is always possible.
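
A sketch of the ECA evaluation for two invented traffic classes: each class's partial state comes directly from the binomial (the two-outcome multinomial) rather than from repeated convolution, and the partial results are then multi-convolved into the global state. The cut-off mechanisms and the CLRj expression are omitted.

    # Per-class partial states via the binomial, then multi-convolution.
    import numpy as np
    from scipy.stats import binom

    def class_pmf(n, p, b):
        """Demand pmf of n identical sources, each ON (b units) with prob p."""
        pmf = np.zeros(n * b + 1)
        pmf[::b] = binom.pmf(np.arange(n + 1), n, p)  # k ON sources -> k*b units
        return pmf

    classes = [(10, 0.2, 1), (6, 0.4, 3)]             # (n, p, b) per traffic class
    state = np.array([1.0])
    for n, p, b in classes:
        state = np.convolve(state, class_pmf(n, p, b))  # multi-convolution

    capacity = 12
    print("P(congestion) =", state[capacity + 1:].sum())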

Relevance: 80.00%

Abstract:

A novel statistic for the local wave amplitude of the 500-hPa geopotential height field is introduced. The statistic uses a Hilbert transform to define a longitudinal wave envelope and dynamical latitude weighting to define the latitudes of interest. Here it is used to detect the existence, or otherwise, of multimodality in its distribution function. The empirical distribution function for the 1960-2000 period is close to a Weibull distribution with shape parameters between 2 and 3. There is substantial interdecadal variability but no apparent local multimodality or bimodality. The zonally averaged wave amplitude, akin to the more usual wave amplitude index, is close to being normally distributed. This is consistent with the central limit theorem, which applies to the construction of the wave amplitude index. For the period 1960-70 it is found that there is apparent bimodality in this index. However, the different amplitudes are realized at different longitudes, so there is no bimodality at any single longitude. As a corollary, it is found that many commonly used statistics to detect multimodality in atmospheric fields potentially satisfy the assumptions underlying the central limit theorem and therefore can only show approximately normal distributions. The author concludes that these techniques may therefore be suboptimal for detecting any multimodality.
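
The statistic's two ingredients are straightforward to reproduce on synthetic data: a Hilbert transform yields the longitudinal envelope, and a Weibull fit summarises the amplitude distribution (the paper finds shape parameters between 2 and 3 for the real field). The synthetic height profile below is invented.

    # Hilbert-transform wave envelope plus a Weibull fit to its amplitudes.
    import numpy as np
    from scipy.signal import hilbert
    from scipy.stats import weibull_min

    rng = np.random.default_rng(2)
    lon = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    field = (np.sin(5 * lon) + 0.5 * np.sin(8 * lon + 1.0)
             + 0.3 * rng.normal(size=lon.size))     # stand-in height anomalies

    envelope = np.abs(hilbert(field))               # local wave amplitude
    shape, loc, scale = weibull_min.fit(envelope, floc=0)
    print("Weibull shape:", round(shape, 2))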