47 results for distribution (probability theory)


Relevance:

30.00%

Publisher:

Abstract:

The metal-insulator (or metal-amorphous semiconductor) blocking contact is still not well understood. In the present paper, we discuss the non-steady-state characteristics of a metal-insulator-metal structure with non-intimate blocking contacts (i.e., a metal-oxide-insulator-metal structure). We consider a uniform distribution (in energy) of impurity states in addition to impurity states at a single energy level within the depletion region. We discuss thermal as well as isothermal characteristics and present expressions for the temperature of maximum current (Tm) and a method to calculate the density of uniformly distributed impurity states. The variation of mobility with electric field has also been considered. Finally, we plot the theoretical curves under different conditions. The present results are in close agreement with available experimental results.

Relevance:

30.00%

Publisher:

Abstract:

Eurytrema sp. egg counts (epg) in the feces of naturally infected cattle were performed and the technique employed showed 94.2% probability of detecting positive cases of the infection with a single examination independently of the host parasite burden. It was also demonstrated that the epg of Eurytrema sp. follows a negative binomial distribution model and is characterized by its small magnitude.
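The single-exam detection probability follows from the zero term of the negative binomial model: for mean m and dispersion k, P(0) = (1 + m/k)^(−k), so an infected host is detected with probability 1 − P(0). A minimal sketch, using hypothetical parameter values rather than the paper's fitted ones:

```python
import math

def nb_zero_prob(mean, k):
    """P(count = 0) for a negative binomial with the given mean and dispersion k."""
    return (1.0 + mean / k) ** (-k)

def detection_prob(mean, k):
    """Probability that a single exam of an infected host yields a non-zero epg."""
    return 1.0 - nb_zero_prob(mean, k)

# Hypothetical parameters for illustration: small mean epg with strong
# aggregation (small k), as is typical of helminth egg counts.
p_detect = detection_prob(mean=5.0, k=0.5)
```

A strongly aggregated distribution (small k) inflates the zero class, which is why the detection probability has to be verified empirically rather than assumed.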

Relevance:

30.00%

Publisher:

Abstract:

A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how their nature naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error.
The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
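The statistical design step described above can be sketched for binary signals: from (observed, ideal) training pairs, estimate for each window pattern the majority ideal value, which is the plug-in estimate of the optimal window filter. This is a minimal illustration with toy data, not the paper's experimental setup:

```python
from collections import Counter, defaultdict

def design_window_filter(observed_signals, ideal_signals, width=3):
    """Estimate the optimal binary window filter from (observed, ideal) signal
    pairs: for each observed window pattern, output the majority ideal value."""
    votes = defaultdict(Counter)
    r = width // 2
    for obs, ide in zip(observed_signals, ideal_signals):
        for i in range(r, len(obs) - r):
            votes[tuple(obs[i - r:i + r + 1])][ide[i]] += 1
    # Plug-in estimate of the optimal filter: majority vote per pattern.
    return {p: c.most_common(1)[0][0] for p, c in votes.items()}

def apply_filter(filt, obs, width=3, default=0):
    """Slide the designed filter over a signal; unseen patterns get `default`."""
    r = width // 2
    out = list(obs)
    for i in range(r, len(obs) - r):
        out[i] = filt.get(tuple(obs[i - r:i + r + 1]), default)
    return out

# Toy training pair: ideal signal plus one isolated impulse (index 6).
ideal = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1]
observed = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
filt = design_window_filter([observed], [ideal])
```

With more training data the empirical majorities converge to the optimal decisions, which is exactly where the design-cost versus constraint trade-off discussed in the paper enters.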

Relevance:

30.00%

Publisher:

Abstract:

The oxidative and thermo-mechanical degradation of HDPE was studied during processing in an internal mixer under two conditions: totally and partially filled chambers, which provide lower and higher concentrations of oxygen, respectively. Two types of HDPE, Phillips and Ziegler-Natta, having different levels of terminal vinyl unsaturations, were analyzed. Materials were processed at 160, 200, and 240 degrees C. Standard rheograms using a partially filled chamber showed that the torque is much more unstable than with a totally filled chamber, which provides an environment depleted of oxygen. Carbonyl and trans-vinylene group concentrations increased, whereas vinyl group concentration decreased with temperature and oxygen availability. The average number of chain scissions and branchings (ns) was calculated from MWD curves; plotted against functional group concentration, it showed that chain scission or branching takes place depending upon oxygen content and vinyl group consumption. Chain scission and branching distribution function (CSBDF) values showed that longer chains undergo chain scission more easily than shorter ones due to their higher probability of entanglement. This yields macroradicals that react with the terminal vinyl unsaturations of other chains, producing chain branching. Shorter chains are more mobile and do not suffer scission; instead, they are used for grafting the macroradicals, increasing the molecular weight. Increases in oxygen concentration, temperature, and vinyl end-group content facilitate the thermo-mechanical degradation, reducing the amount of both longer chains (via chain scission) and shorter chains (via chain branching) and narrowing the polydispersity. Phillips HDPE produces a higher level of chain branching than the Ziegler-Natta type under the same processing conditions. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sales are used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different log-normal parameters for the largest firms in the distribution, which are mostly foreign firms. For USA firms, the log-normal distribution has to be gradually truncated after a certain critical value. Therefore, the original hypothesis of proportionate effect proposed by Gibrat is valid, with some modification for very large firms. We also consider the possible mechanisms behind this distribution. (c) 2006 Published by Elsevier B.V.
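The Zipf plot technique ranks firms by size and plots log(rank) against log(size); a log-normal shows up as a curved plot, whereas a pure power law would be a straight line. A minimal sketch with simulated data (the log-normal parameters are arbitrary placeholders, not the paper's fits):

```python
import math
import random

random.seed(42)

# Hypothetical log-normal parameters; the paper fits real sales data instead.
MU, SIGMA, N = 10.0, 1.5, 1000

# Simulated firm sizes, ranked from largest to smallest.
sizes = sorted((random.lognormvariate(MU, SIGMA) for _ in range(N)),
               reverse=True)

# Zipf plot coordinates: log(rank) versus log(size).
zipf = [(math.log(rank), math.log(size))
        for rank, size in enumerate(sizes, start=1)]
```

On these coordinates a gradual truncation of the tail, as reported for the USA firms, would appear as a downward bending of the largest-rank end of the curve.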

Relevance:

30.00%

Publisher:

Abstract:

A bag at temperature T with bag pressure B(T) = B(0)[1 − (T/Tc)^4] is shown to be consistent with recent lattice data on the pi and rho mesons. The limiting temperature Tl of the pion bag, obtained from the Bekenstein entropy bound, is lower than that of other mesons. This agrees with the thermal distribution of pi, K, and rho in heavy-ion collisions, which (unlike proton-nucleus or pp data) shows a marked difference between the T of the pion and that of other mesons in the mid-rapidity region.
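The temperature-dependent bag pressure above is a one-liner to evaluate; note that B(T) vanishes exactly at T = Tc. The numerical values of B(0) and Tc below are illustrative placeholders, not the paper's fitted constants:

```python
def bag_pressure(T, B0=1.0, Tc=0.17):
    """B(T) = B0 * [1 - (T/Tc)^4].
    B0 and Tc (here in arbitrary units / GeV) are hypothetical values."""
    return B0 * (1.0 - (T / Tc) ** 4)
```

The quartic form makes the pressure nearly flat at low T and drop off steeply as T approaches Tc, which is what allows consistency with lattice data near the transition.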

Relevance:

30.00%

Publisher:

Abstract:

Tsallis postulated a generalized form of entropy that gives rise to a new statistics, now known as Tsallis statistics. In the present work, we compare Tsallis statistics with the gradually truncated Levy flight and discuss the distribution of an economic index, the Standard and Poor's 500, using the values of standard deviation calculated by our model. We find that both statistics give almost the same distribution. Thus we feel that gradual truncation of the Levy distribution after a certain critical step size, when describing complex systems, is a requirement of generalized thermodynamics or something similar. The gradually truncated Levy flight is based on physical considerations and brings a better physical picture of the dynamics of the whole system; Tsallis statistics gives it theoretical support. Together, the two statistics can be utilized for the development of a more exact portfolio theory or to better understand the complexities of human and financial behavior. A comparison of the two statistics is made. (C) 2002 Published by Elsevier B.V.
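One way to sketch a gradual truncation of a Levy-like step distribution is to draw step magnitudes from a power-law tail and then apply an exponential taper beyond a critical size via rejection. This is a generic illustration of the idea, not the paper's specific model or parameters:

```python
import math
import random

random.seed(0)

def gradually_truncated_levy_step(alpha=1.5, x_min=1.0, x_c=10.0, k=5.0):
    """Draw a signed step whose magnitude follows P(x) ~ x^(-1-alpha) for
    x_min <= x <= x_c, gradually suppressed by exp(-(x - x_c)/k) beyond the
    critical size x_c. All parameter values are hypothetical."""
    while True:
        u = random.random()
        x = x_min * (1.0 - u) ** (-1.0 / alpha)   # inverse-transform power law
        accept = 1.0 if x <= x_c else math.exp(-(x - x_c) / k)
        if random.random() < accept:
            return x if random.random() < 0.5 else -x

steps = [gradually_truncated_levy_step() for _ in range(2000)]
```

The taper keeps the Levy-like body of the distribution intact while giving it finite variance, which is the feature the abstract links to generalized thermodynamics.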

Relevance:

30.00%

Publisher:

Abstract:

Smart microgrids offer a new and challenging domain for power theories and metering techniques because they include a variety of intermittent power sources which positively impact power flow and distribution losses but may cause voltage asymmetry and frequency variation. In smart microgrids, the voltage distortion and asymmetry in the presence of poly-phase nonlinear loads can also be greater than in usual distribution lines fed by the utility, thus affecting measurement accuracy and possibly causing tripping of protections. In such a context, a reconsideration of power theories is required, since they form the basis for supply and load characterization. A revision of revenue metering techniques is also suggested to ensure a correct penalization of the loads for their responsibility in generating reactive power, voltage asymmetry, and distortion. This paper shows that the conservative power theory provides a suitable background to cope with smart grid characterization and metering needs. Simulation and experimental results show the properties of the proposed approach.

Relevance:

30.00%

Publisher:

Abstract:

We define generalized Heaviside functions for a variable x in R^n, and for variables (x, t) in R^n × R^m. We then study properties such as composition, invertibility, and the association relation (weak equality). This work is developed in the context of Colombeau generalized functions.

Relevance:

30.00%

Publisher:

Abstract:

The metal-insulator or metal-amorphous semiconductor blocking contact is still not well understood. Here, the intimate metal-insulator and metal-oxide-insulator contacts are discussed. Further, the steady-state characteristics of metal-oxide-insulator-metal structures are also discussed. The oxide is an insulator with a wider energy band gap, forming a layer about 50 Å thick. A uniform energetic distribution of impurities is considered in addition to impurities at a single energy level inside the surface charge region at the oxide-insulator interface. Analytical expressions are presented for the electrical potential, field, thickness of the depletion region, capacitance, and charge accumulated in the surface charge region. The electrical characteristics are compared with reference to the relative densities of the two types of impurities. ln I is proportional to the square root of the applied potential if energetically distributed impurities are relatively important; however, the distribution of the electrical potential is quite complicated. In general, energetically distributed impurities can considerably change the electrical characteristics of these structures.

Relevance:

30.00%

Publisher:

Abstract:

We discuss the non-steady-state electrical characteristics of a metal-insulator-metal structure. We consider an exponential distribution (in energy) of impurity states in addition to impurity states at a single energy level within the depletion region. We discuss thermal as well as isothermal characteristics and present an expression for the temperature of maximum current (Tm) and a method to calculate the density of exponentially distributed impurity states. We plot the theoretical curves for various sets of parameters, along with the variation of Tm and Im (the maximum current) with applied potential for various impurity distributions. The present model can explain the available experimental results. Finally, we compare the non-steady-state characteristics in three cases: (i) impurity states only at a single energy level, (ii) a uniform energetic distribution of impurity states, and (iii) an exponential energetic distribution of impurity states.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new methodology to evaluate, in a predictive way, the reliability of distribution systems, considering the impact of automatic recloser switches. The developed algorithm is based on state enumeration techniques with Markovian models and on minimal cut set theory. Some computational aspects related to the implementation of the proposed algorithm in typical distribution networks are also discussed. The description of the proposed approach is carried out using a sample test system. The results obtained with a typical configuration of a Brazilian system (EDP Bandeirante Energia S.A.) are presented and discussed.
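The minimal cut set part of such a method reduces, in its simplest form, to combining component unavailabilities: assuming independent components, the system fails when every component of at least one minimal cut is down. A minimal sketch with hypothetical component data, not the paper's test systems:

```python
from math import prod

def system_unavailability(cut_sets, unavailability):
    """System unavailability from minimal cut sets, assuming independent
    components: U_sys = 1 - prod over cuts of (1 - prod of U_i in the cut)."""
    up = 1.0
    for cut in cut_sets:
        up *= 1.0 - prod(unavailability[c] for c in cut)
    return 1.0 - up

# Hypothetical feeder: per-component probabilities of being unavailable.
U = {"line1": 0.01, "line2": 0.02, "recloser": 0.005}
cuts = [("line1", "line2"), ("recloser",)]   # illustrative minimal cut sets

u_sys = system_unavailability(cuts, U)
```

This product form slightly overcounts when cuts share components, which is one reason state enumeration with Markovian models is used for the full evaluation.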

Relevance:

30.00%

Publisher:

Abstract:

Throughout this article, it is assumed that the non-central chi-square chart with two-stage sampling (TSS chi-square chart) is employed to monitor a process where the observations of the quality characteristic of interest X are independent and identically normally distributed with mean μ and variance σ². The process starts with the mean and the variance on target (μ = μ0; σ² = σ0²), but at some random time in the future an assignable cause shifts the mean from μ0 to μ1 = μ0 ± δσ0, δ > 0, and/or increases the variance from σ0² to σ1² = γ²σ0², γ > 1. Before the assignable cause occurs, the process is considered to be in a state of statistical control (the in-control state). As with the Shewhart charts, samples of size n0 + 1 are taken from the process at regular time intervals. The sampling is performed in two stages. At the first stage, the first item of the i-th sample is inspected. If its X value, say Xi1, is close to the target value (|Xi1 − μ0| < w0σ0, w0 > 0), then the sampling is interrupted. Otherwise, at the second stage, the remaining n0 items are inspected and the following statistic is computed:

Wi = Σ_{j=2}^{n0+1} (Xij − μ0 + ξiσ0)², i = 1, 2, ...

Let d be a positive constant; then ξi = d if Xi1 > μ0, and ξi = −d otherwise. A signal is given at sample i if |Xi1 − μ0| > w0σ0 and Wi > kChi·σ0², where kChi is the factor used in determining the upper control limit of the non-central chi-square chart. If devices such as go/no-go gauges can be considered, then measurements are not required except when the sampling goes to the second stage. Let P be the probability of deciding that the process is in control, and Pi, i = 1, 2, the probability of deciding that the process is in control at stage i of the sampling procedure.
Thus P = P1 + P2 − P1P2, with P1 = Pr[μ0 − w0σ0 ≤ X ≤ μ0 + w0σ0] and P2 = Pr[W ≤ kChi·σ0²]. During the in-control period, W/σ0² follows a non-central chi-square distribution with n0 degrees of freedom and non-centrality parameter λ0 = n0d², i.e. W/σ0² ~ χ²_{n0}(λ0). During the out-of-control period, W/σ1² follows a non-central chi-square distribution with n0 degrees of freedom and non-centrality parameter λ1 = n0(δ + ξ)²/γ². The effectiveness of a control chart in detecting a process change can be measured by the average run length (ARL), the speed with which the chart detects process shifts. The ARL of the proposed chart is easily determined because the number of samples before a signal is a geometrically distributed random variable with parameter 1 − P, that is, ARL = 1/(1 − P). It is shown that the performance of the proposed chart is better than that of the joint X̄ and R charts. Furthermore, if the TSS chi-square chart is used for monitoring diameters, volumes, weights, etc., then appropriate devices, such as go/no-go gauges, can be used to decide whether the sampling should go to the second stage or not. When the process is stable and the joint X̄ and R charts are in use, the monitoring becomes monotonous, because an X̄ or R value rarely falls outside the control limits. The natural consequence is that the user pays less and less attention to the steps required to obtain the X̄ and R values; in some cases, this lack of attention can result in serious mistakes. The TSS chi-square chart has the advantage that most samplings are interrupted; consequently, most of the time the user will be working with attributes. Our experience shows that the inspection of one item by attributes is much less monotonous than measuring four or five items at each sampling.
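The in-control probability P and the ARL = 1/(1 − P) can be checked by direct Monte Carlo simulation of the two-stage rule, which avoids evaluating the non-central chi-square CDF. A sketch on standardized data; the chart parameters (n0, d, w0, kChi) are illustrative choices, not the paper's designs:

```python
import random

random.seed(1)

def in_control_prob(n0=4, d=1.0, w0=1.0, k_chi=20.0, trials=20000):
    """Monte Carlo estimate of P, the per-sample probability of deciding the
    in-control process is in control, for the TSS chi-square chart.
    Works with standardized observations (mu0 = 0, sigma0 = 1)."""
    in_control = 0
    for _ in range(trials):
        x1 = random.gauss(0.0, 1.0)             # first item of the sample
        if abs(x1) < w0:                         # stage 1: sampling interrupted
            in_control += 1
            continue
        xi = d if x1 > 0 else -d                 # shift by +d or -d
        w = sum((random.gauss(0.0, 1.0) + xi) ** 2 for _ in range(n0))
        if w <= k_chi:                           # stage 2: W below the limit
            in_control += 1
    return in_control / trials

P = in_control_prob()
ARL = 1.0 / (1.0 - P)   # run length is geometric with parameter 1 - P
```

With these placeholder parameters, W is (by construction) a draw from σ0²·χ²_{n0}(n0d²), so the estimate of P converges to P1 + (1 − P1)P2 = P1 + P2 − P1P2.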

Relevance:

30.00%

Publisher:

Abstract:

Regulatory authorities in many countries, in order to maintain an acceptable balance between customer service quality and costs, are introducing performance-based regulation. These regulations impose penalties, and in some cases rewards, which introduce a component of financial risk to an electric power utility due to the uncertainty associated with preserving a specific level of system reliability. In Brazil, for instance, one of the reliability indices receiving special attention from the utilities is the Maximum Continuous Interruption Duration per customer (MCID). This paper describes a chronological Monte Carlo simulation approach to evaluate probability distributions of reliability indices, including the MCID, and the corresponding penalties. In order to achieve the desired efficiency, modern computational techniques are used for modeling (UML, the Unified Modeling Language) as well as for programming (object-oriented programming). Case studies on a simple distribution network and on real Brazilian distribution systems are presented and discussed. © Copyright KTH 2006.
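A chronological Monte Carlo estimate of the MCID distribution can be sketched with a toy failure/repair model: exponential times between interruptions over a year, exponential repair durations, and a penalty whenever the longest single interruption exceeds a limit. All rates and limits below are hypothetical, not utility data:

```python
import random

random.seed(7)

# Illustrative parameters: interruptions per customer per year, mean repair
# duration in hours, and a hypothetical regulatory limit on one interruption.
FAILURE_RATE = 4.0
MEAN_REPAIR_H = 3.0
MCID_LIMIT_H = 8.0

def simulate_year():
    """One chronological sample: exponential times between failures over one
    year; returns the maximum continuous interruption duration (MCID)."""
    t, mcid = 0.0, 0.0
    while True:
        t += random.expovariate(FAILURE_RATE)    # years until next failure
        if t > 1.0:
            return mcid
        mcid = max(mcid, random.expovariate(1.0 / MEAN_REPAIR_H))

years = [simulate_year() for _ in range(5000)]
penalty_prob = sum(m > MCID_LIMIT_H for m in years) / len(years)
```

Repeating the yearly simulation yields the empirical distribution of the MCID, from which the probability (and hence the financial risk) of a penalty can be read off directly.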

Relevance:

30.00%

Publisher:

Abstract:

The structural efficiency of the Nailed Plywood Box Beam is directly dependent on the behavior of the flange-web joint, which determines the partial composition of the section, since slip between elements reduces the effective rigidity of the section and changes its stress distribution and total displacement. This work discusses the use of Nailed Plywood Box Beams in small-span timber bridges, focusing on the reliability of the beam element. The results of tests carried out on 21 full-scale Nailed Plywood Box Beams are presented. The analysis of the maximum load test results shows that they follow a normal distribution, permitting the calculation of characteristic values as normal distribution theory specifies. The reliability of these elements was analyzed with a focus on timber bridge design, in order to estimate the failure probability as a function of the load level.
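Once the maximum loads are accepted as normally distributed, a characteristic value can be taken as the lower 5th percentile, x_k = mean − 1.645·s, a convention common in structural design codes. A minimal sketch with made-up load values, not the actual test data from the 21 beams:

```python
import math

def characteristic_value(loads):
    """Lower 5th-percentile characteristic value under a normal assumption:
    x_k = sample mean - 1.645 * sample standard deviation."""
    n = len(loads)
    mean = sum(loads) / n
    var = sum((x - mean) ** 2 for x in loads) / (n - 1)
    return mean - 1.645 * math.sqrt(var)

# Hypothetical maximum-load results (kN), for illustration only.
loads = [52.1, 48.7, 55.3, 50.9, 47.5, 53.8, 49.2, 51.6]
xk = characteristic_value(loads)
```

The same fitted normal distribution then gives the failure probability at any load level as the CDF evaluated at that load, which is the quantity the reliability analysis estimates.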