165 results for distribution (probability theory)


Relevance:

40.00%

Publisher:

Abstract:

In this paper, a new practical method based on graph theory and an improved genetic algorithm is employed to solve the optimal sectionalizer switch placement problem. The proposed method determines the best locations of sectionalizer switching devices in distribution networks, accounting for the effects of the presence of distributed generation (DG) in the fitness functions and other optimization constraints, so that the maximum number of customers can be supplied by distributed generation sources in islanded distribution systems after possible faults. The proposed method is simulated and tested on several distribution test systems, both with and without DG. The simulation results validate the proposed method for switch placement in distribution networks in the presence of distributed generation.
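The fitness evaluation at the heart of such a method can be sketched on a toy radial feeder. Everything below — the network, customer counts, cost weight, and the rule that every sectionalizer opens to isolate a fault — is a hypothetical simplification for illustration, not the paper's model; exhaustive search stands in for the genetic algorithm on this tiny case.

```python
# Toy fitness for sectionalizer placement with DG islanding. The feeder,
# customer counts and cost weight are hypothetical; after a fault, every
# sectionalizer is assumed open, and an island is supplied if it contains
# a DG node and does not touch the faulted section.
from itertools import combinations

EDGES = [(0, 1), (1, 2), (2, 3), (3, 4)]        # radial feeder sections
CUSTOMERS = {0: 0, 1: 30, 2: 20, 3: 25, 4: 15}  # customers at each node
DG_NODES = {4}                                  # nodes with distributed generation
SWITCH_COST = 5.0                               # hypothetical cost per sectionalizer

def components(edges):
    """Connected components over all feeder nodes, given the closed edges."""
    parent = {n: n for e in EDGES for n in e}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    comps = {}
    for n in parent:
        comps.setdefault(find(n), set()).add(n)
    return list(comps.values())

def fitness(switch_edges):
    """Average customers supplied by DG islands over all single faults,
    minus a cost penalty per installed sectionalizer."""
    restored = 0
    comps = components([e for e in EDGES if e not in switch_edges])
    for fault in EDGES:
        for comp in comps:
            touches_fault = fault[0] in comp or fault[1] in comp
            if comp & DG_NODES and not touches_fault:
                restored += sum(CUSTOMERS[n] for n in comp)
    return restored / len(EDGES) - SWITCH_COST * len(switch_edges)

# exhaustive search stands in for the paper's improved genetic algorithm here
best = max((set(c) for k in range(len(EDGES) + 1)
            for c in combinations(EDGES, k)), key=fitness)
```

On this feeder a single sectionalizer on section (2, 3) wins: it lets the DG at node 4 keep nodes 3 and 4 supplied after upstream faults, while extra switches cost more than they restore.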

Relevance:

40.00%

Publisher:

Abstract:

We compare three alternative methods for eliciting retrospective confidence in the context of a simple perceptual task: the Simple Confidence Rating (a direct report on a numerical scale), the Quadratic Scoring Rule (a post-wagering procedure), and the Matching Probability (MP; a generalization of the no-loss gambling method). We systematically compare the results obtained with these three rules to the theoretical confidence levels that can be inferred from performance in the perceptual task using Signal Detection Theory (SDT). We find that the MP agrees best with these theoretical levels, and conclude that it is particularly well suited for studies of confidence that use SDT as a theoretical framework.
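Under the equal-variance Gaussian SDT model the abstract relies on, the theoretical confidence level can be recovered from task performance alone. The sketch below (our illustration, not the authors' code) infers d' from hit and false-alarm rates and converts it into the proportion correct expected of an unbiased observer:

```python
# Sketch: theoretical confidence from performance under equal-variance
# Gaussian SDT. Function names and the unbiased-observer assumption are ours.
from statistics import NormalDist

N = NormalDist()

def d_prime(hit_rate, fa_rate):
    """Sensitivity index from hit and false-alarm rates."""
    return N.inv_cdf(hit_rate) - N.inv_cdf(fa_rate)

def theoretical_confidence(hit_rate, fa_rate):
    """Expected proportion correct for an unbiased observer with this d'."""
    return N.cdf(d_prime(hit_rate, fa_rate) / 2)

# an elicited Matching Probability can then be compared to this benchmark
benchmark = theoretical_confidence(hit_rate=0.84, fa_rate=0.16)
```

An elicitation rule "agrees" with SDT to the extent that the confidence it elicits tracks this performance-derived benchmark.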

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the placement of sectionalizers, as well as a cross-connection, is optimally determined so that the objective function is minimized. The objective function employed in this paper consists of two main parts: the switch cost and the reliability cost. The switch cost is composed of the cost of the sectionalizers and the cross-connection, and the reliability cost is assumed to be proportional to a reliability index, SAIDI. To optimize the allocation of sectionalizers and the cross-connection realistically, the cost related to each element is treated as discrete. Because binary variables represent the availability of sectionalizers, the problem is highly discrete. Consequently, the risk of becoming trapped in a local minimum is high, and a heuristic-based optimization method is needed. A Discrete Particle Swarm Optimization (DPSO) is employed in this paper to deal with this discrete problem. Finally, a test distribution system is used to validate the proposed method.
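The binary-coded ("discrete") PSO update commonly used for this kind of switch problem maps each velocity through a sigmoid into a bit-flip probability. The sketch below uses a stand-in objective — a switch cost plus a reliability cost that falls as switches are added; the paper's real objective combines sectionalizer/cross-connection cost with a SAIDI-based reliability cost:

```python
# Sketch of a binary/discrete PSO; the objective and all constants are
# our toy stand-ins, not the paper's model.
import math
import random

random.seed(1)

def objective(bits):
    # toy stand-in: switch cost + a "reliability cost" that decreases
    # as more switches are installed (paper: switch cost + SAIDI cost)
    switch_cost = 3.0 * sum(bits)
    reliability_cost = 20.0 / (1 + sum(bits))
    return switch_cost + reliability_cost

def dpso(n_bits=6, n_particles=12, iters=60, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=objective)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(n_bits):
                vel[i][j] = (w * vel[i][j]
                             + c1 * random.random() * (pbest[i][j] - pos[i][j])
                             + c2 * random.random() * (gbest[j] - pos[i][j]))
                # sigmoid maps the velocity to a probability of setting the bit
                pos[i][j] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][j])) else 0
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]
    return gbest

best = dpso()
```

Installing no switches costs 20.0 here, so any useful run of the swarm should end at least as cheap as that; the toy optimum installs two switches.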

Relevance:

30.00%

Publisher:

Abstract:

Many interesting phenomena have been observed in layers of granular materials subjected to vertical oscillations; these include the formation of a variety of standing wave patterns, and the occurrence of isolated features called oscillons, which alternately form conical heaps and craters oscillating at one-half of the forcing frequency. No continuum-based explanation of these phenomena has previously been proposed. We apply a continuum theory, termed the double-shearing theory, which has had success in analyzing various problems in the flow of granular materials, to the problem of a layer of granular material on a vertically vibrating rigid base undergoing vertical oscillations in plane strain. There exists a trivial solution in which the layer moves as a rigid body. By investigating linear perturbations of this solution, we find that at certain amplitudes and frequencies this trivial solution can bifurcate. The time dependence of the perturbed solution is governed by Mathieu’s equation, which allows stable, unstable and periodic solutions, and the observed period-doubling behaviour. Several solutions for the spatial velocity distribution are obtained; these include one in which the surface undergoes vertical velocities that have sinusoidal dependence on the horizontal space dimension, which corresponds to the formation of striped standing waves, and is one of the observed patterns. An alternative continuum theory of granular material mechanics, in which the principal axes of stress and rate-of-deformation are coincident, is shown to be incapable of giving rise to similar instabilities.
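For reference, the parametric-resonance mechanism behind the reported period doubling is that of Mathieu's equation; in its canonical form (with the conventional symbols a and q, which need not match the paper's notation):

```latex
\ddot{y} + \left( a - 2q \cos 2t \right) y = 0
```

For small q, the strongest instability tongue sits near a = 1, where the growing solution oscillates with period 2π — twice the period π of the parametric forcing — which is exactly the subharmonic (half-frequency) response observed in vibrated granular layers.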

Relevance:

30.00%

Publisher:

Abstract:

Surveillance for invasive non-indigenous species (NIS) is an integral part of a quarantine system. Estimating the efficiency of a surveillance strategy relies on many uncertain parameters estimated by experts, such as the efficiency of its components in the face of the specific NIS, the ability of the NIS to inhabit different environments, and so on. Given the importance of detecting an invasive NIS within a critical period of time, it is crucial that these uncertainties be accounted for in the design of the surveillance system. We formulate a detection model that takes into account, in addition to structured sampling for incursive NIS, incidental detection by untrained workers. We use info-gap theory for satisficing (rather than optimizing) the probability of detection, while at the same time maximizing the robustness to uncertainty. We demonstrate the trade-off between robustness to uncertainty and an increase in the required probability of detection. An empirical example based on the detection of Pheidole megacephala on Barrow Island demonstrates the use of info-gap analysis to select a surveillance strategy.
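The robustness-satisficing idea can be made concrete with a toy single-parameter model. Everything here — the fractional-error uncertainty set U(h), the independent-inspection detection model, and the numbers — is our illustrative assumption, not the authors' surveillance model:

```python
# Info-gap robustness sketch (our toy model): the per-inspection detection
# probability p is uncertain around an estimate p0, with uncertainty set
# U(h) = {p : |p - p0| <= h * p0}. Robustness is the largest h at which
# even the worst-case p still meets the required detection probability
# over n independent inspections.

def detection_prob(p, n):
    """P(at least one detection in n independent inspections)."""
    return 1 - (1 - p) ** n

def robustness(p0, n, required, step=1e-4):
    h = 0.0
    while True:
        worst_p = max(p0 * (1 - (h + step)), 0.0)  # worst case in U(h + step)
        if detection_prob(worst_p, n) < required:
            return h
        h += step

# the trade-off: demanding a higher detection probability leaves less robustness
low_demand = robustness(p0=0.1, n=30, required=0.80)
high_demand = robustness(p0=0.1, n=30, required=0.95)
```

The two calls exhibit the trade-off the abstract describes: the stricter detection requirement tolerates far less uncertainty in the expert estimate of p.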

Relevance:

30.00%

Publisher:

Abstract:

Network induced delay in networked control systems (NCS) is inherently non-uniformly distributed and exhibits a multifractal nature. However, such network characteristics have not been well considered in NCS analysis and synthesis. Making use of the statistical distribution of the NCS network induced delay, a delay-distribution-based stochastic model is adopted to link Quality-of-Control and network Quality-of-Service for NCS with uncertainties. From this model, together with a tighter bounding technique for cross terms, H∞ NCS analysis is carried out with significantly improved stability results. Furthermore, a memoryless H∞ controller is designed to stabilize the NCS and to achieve the prescribed disturbance attenuation level. Numerical examples are given to demonstrate the effectiveness of the proposed method.

Relevance:

30.00%

Publisher:

Abstract:

Information Retrieval (IR) is an important albeit imperfect component of information technologies. Insufficient diversity of retrieved documents is one of the primary issues studied in this research. This study shows that this problem leads to a decrease in precision and recall, the traditional measures of information retrieval effectiveness. This thesis presents an adaptive IR system based on the theory of adaptive dual control. The aim of the approach is the optimization of retrieval precision after all feedback has been issued, which is done by increasing the diversity of retrieved documents; this study shows that the value of recall reflects this diversity. The Probability Ranking Principle is viewed in the literature as the “bedrock” of current probabilistic Information Retrieval theory. Neither the proposed approach nor other methods of diversification of retrieved documents from the literature conform to this principle. This study shows by counterexample that the Probability Ranking Principle does not in general lead to optimal precision in a search session with feedback (for which it may not have been designed, but is actively used). Retrieval precision of the search session should be optimized with a multistage stochastic programming model to accomplish this aim. However, such models are computationally intractable. Therefore, approximate linear multistage stochastic programming models are derived in this study, where the multistage improvement of the probability distribution is modelled using the proposed feedback correctness method. The proposed optimization models are based on several assumptions, starting with the assumption that Information Retrieval is conducted in units of topics. The use of clusters is the primary reason why a new method of probability estimation is proposed. The adaptive dual control of the topic-based IR system was evaluated in a series of experiments conducted on the Reuters, Wikipedia and TREC collections of documents.

The Wikipedia experiment revealed that the dual control feedback mechanism improves precision and S-recall when all the underlying assumptions are satisfied. In the TREC experiment, this feedback mechanism was compared to a state-of-the-art adaptive IR system based on BM-25 term weighting and the Rocchio relevance feedback algorithm. The baseline system exhibited better effectiveness than the cluster-based optimization model of ADTIR. The main reason for this was the insufficient quality of the clusters generated from the TREC collection, which violated the underlying assumption.

Relevance:

30.00%

Publisher:

Abstract:

This thesis discusses various aspects of the integrity monitoring of GPS applied to civil aircraft navigation in different phases of flight. These flight phases include en route, terminal, non-precision approach and precision approach. The thesis includes four major topics: the probability problem of GPS navigation service, risk analysis of aircraft precision approach and landing, theoretical analysis of Receiver Autonomous Integrity Monitoring (RAIM) techniques and RAIM availability, and GPS integrity monitoring at a ground reference station. Particular attention is paid to the mathematical aspects of the GPS integrity monitoring system. The research has been built upon the stringent integrity requirements defined by the civil aviation community, and concentrates on the capability and performance of practical integrity monitoring systems, investigated with rigorous mathematical and statistical concepts and approaches. The major contributions of this research are:

• Rigorous integrity and continuity risk analysis for aircraft precision approach. Based on the joint probability density function of the affecting components, the integrity and continuity risks of aircraft precision approach with DGPS were computed. This advances the conventional method of allocating the risk probability.

• A theoretical study of RAIM test power. This is the first time a theoretical study of RAIM test power based on probability and statistical theory has been presented, resulting in a new set of RAIM criteria.

• Development of a GPS integrity monitoring and DGPS quality control system based on a GPS reference station. A prototype GPS integrity monitoring and DGPS correction prediction system has been developed and tested, based on the AUSNAV GPS base station on the roof of the QUT ITE Building.
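A flavour of the RAIM test-power question can be given by Monte Carlo (a deliberate simplification: the thesis works with exact probability distributions, and real RAIM statistics use least-squares residuals of the position solution rather than raw ranges; all numbers below are illustrative):

```python
# Monte-Carlo sketch of RAIM-style test power (our simplification): the
# detection statistic is a sum of squared residuals; under a satellite
# fault (bias) the statistic grows, and "power" is the probability that
# it exceeds the fault-free threshold.
import random

random.seed(0)

def statistic(bias, n_sats=6, sigma=1.0):
    # residuals of n_sats pseudoranges; one satellite carries the bias
    r = [random.gauss(0, sigma) for _ in range(n_sats)]
    r[0] += bias
    return sum(x * x for x in r)

def power(bias, threshold, trials=20000):
    return sum(statistic(bias) > threshold for _ in range(trials)) / trials

threshold = 16.8              # roughly the chi-square(6) upper 1% point
p_fa = power(0.0, threshold)  # false-alarm rate, by design near 0.01
p_d = power(5.0, threshold)   # detection power under a 5-sigma bias
```

The threshold fixes the false-alarm rate; the power then follows from the fault magnitude and geometry, which is the relationship a theoretical RAIM power study characterizes exactly.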

Relevance:

30.00%

Publisher:

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. These advantages include a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm.

The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property of adaptive lattice filters, the polynomial-order reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We show that the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it uses the minimum mean-square error criterion).

To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, as well as recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
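The fractional-lower-order update behind a least-mean p-norm algorithm can be illustrated on the simplest possible structure, a one-coefficient transversal predictor of an AR(1) process (the thesis works with full lattice stages; the impulsive-noise model and all constants below are our toy choices):

```python
# Sketch of the least-mean p-norm (LMP) update, w += mu * |e|^(p-1) * sign(e) * x,
# on a one-tap predictor. The impulsive noise and constants are toy choices;
# the thesis applies the idea to lattice stages, not a transversal tap.
import random

random.seed(2)

def heavy_tailed():
    # crude impulsive noise: mostly Gaussian, with occasional large outliers
    return random.gauss(0, 1) * (20 if random.random() < 0.02 else 1)

# AR(1) process x[n] = a * x[n-1] + noise, with true coefficient a = 0.7
a_true, x = 0.7, [0.0]
for _ in range(5000):
    x.append(a_true * x[-1] + heavy_tailed())

def lmp_estimate(x, p=1.2, mu=0.005):
    """Estimate the AR(1) coefficient by a p-norm (dispersion) gradient descent."""
    w = 0.0
    for n in range(1, len(x)):
        e = x[n] - w * x[n - 1]
        # LMP step: the |e|^(p-1) factor, with p < 2, damps the outliers
        w += mu * (abs(e) ** (p - 1)) * (1 if e >= 0 else -1) * x[n - 1]
    return w

a_hat = lmp_estimate(x)
```

With p = 2 this reduces to plain LMS, whose step size blows up on the outliers; choosing p closer to 1 trades convergence speed for robustness to the impulsive samples.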

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a stability analysis, based on bifurcation theory, for a distribution static compensator (DSTATCOM) operating in current control mode. Bifurcations delimit the operating zones of nonlinear circuits and, hence, the capability to compute these bifurcations is of practical interest for design. A control design for the DSTATCOM is proposed. Along with this control, a suitable mathematical representation of the DSTATCOM is proposed to carry out the bifurcation analysis efficiently. The stability regions in the Thevenin equivalent plane are computed for different power factors at the point of common coupling. In addition, the stability regions in the control gain space, as well as the contour lines for different Floquet multipliers, are computed. It is demonstrated through bifurcation analysis that the loss of stability in the DSTATCOM is due to the emergence of a Neimark bifurcation. The observations are verified through simulation studies.
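The Floquet machinery used here can be sketched generically: integrate the linearized T-periodic system over one period starting from the identity to obtain the monodromy matrix, whose eigenvalues are the Floquet multipliers; stability requires them inside (or on) the unit circle, and a Neimark bifurcation corresponds to a complex-conjugate pair crossing it. The example below applies this recipe to Mathieu's equation, not to the DSTATCOM model:

```python
# Generic Floquet-multiplier computation, illustrated on Mathieu's
# equation x'' + (a - 2 q cos 2t) x = 0 (not the paper's DSTATCOM model).
import cmath
import math

def monodromy(a, q, steps=2000):
    """Monodromy matrix over one period T = pi, integrated with RK4."""
    h = math.pi / steps
    def f(t, x, v):
        return v, -(a - 2 * q * math.cos(2 * t)) * x
    cols = []
    for x0, v0 in ((1.0, 0.0), (0.0, 1.0)):  # columns start as the identity
        x, v, t = x0, v0, 0.0
        for _ in range(steps):
            k1 = f(t, x, v)
            k2 = f(t + h / 2, x + h / 2 * k1[0], v + h / 2 * k1[1])
            k3 = f(t + h / 2, x + h / 2 * k2[0], v + h / 2 * k2[1])
            k4 = f(t + h, x + h * k3[0], v + h * k3[1])
            x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
            t += h
        cols.append((x, v))
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

def multipliers(M):
    """Eigenvalues of a 2x2 matrix via the quadratic formula."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# a = 5 lies between instability tongues (stable); a = 1 sits in the first tongue
stable = all(abs(m) <= 1 + 1e-6 for m in multipliers(monodromy(a=5.0, q=0.2)))
unstable = any(abs(m) > 1 + 1e-6 for m in multipliers(monodromy(a=1.0, q=0.2)))
```

For the DSTATCOM the same procedure is applied to the linearization about the periodic steady state; sweeping parameters and contouring |multiplier| = 1 yields the stability regions the paper reports.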

Relevance:

30.00%

Publisher:

Abstract:

This article assesses the non-linear oscillations of a distribution static compensator operating in voltage control mode using bifurcation theory. A mathematical model of the distribution static compensator in voltage control mode is derived to carry out the bifurcation analysis. The stability regions in the Thevenin equivalent plane are computed. In addition, the stability regions in the control gain space, as well as the contour lines for different Floquet multipliers, are computed. The impacts of the AC and DC capacitors on stability are analyzed through bifurcation theory. The observations are verified through simulation studies. The computation of the stability region allows the assessment of the stable operating zones for a power system that includes a distribution static compensator operating in voltage mode.

Relevance:

30.00%

Publisher:

Abstract:

This article introduces a “pseudo classical” notion of modelling non-separability. This form of non-separability can be viewed as lying between separability and quantum-like non-separability. Non-separability is formalized in terms of the non-factorizability of the underlying joint probability distribution. One decision criterion for determining the non-factorizability of the joint distribution is related to the rank of a matrix; another approach is based on the chi-square goodness-of-fit test. This pseudo-classical notion of non-separability is discussed in terms of quantum games and concept combinations in human cognition.
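For two binary variables, the rank criterion reduces to a determinant check: the 2x2 joint-probability matrix factorizes into a product of its marginals exactly when it has rank 1, i.e. its determinant vanishes. A minimal sketch (the example distributions are ours, not the article's):

```python
# Rank-1 test for factorizability of a 2x2 joint distribution:
# P(A, B) = P(A) P(B) holds iff the matrix [[p00, p01], [p10, p11]]
# has rank 1, i.e. zero determinant.

def is_separable(joint, tol=1e-12):
    det = joint[0][0] * joint[1][1] - joint[0][1] * joint[1][0]
    return abs(det) < tol

# outer product of marginals (0.4, 0.6) and (0.3, 0.7): factorizable
product = [[0.12, 0.28], [0.18, 0.42]]
# perfectly correlated outcomes: not factorizable
correlated = [[0.5, 0.0], [0.0, 0.5]]
```

With sampled rather than exact probabilities, the determinant is never exactly zero, which is where the chi-square goodness-of-fit approach mentioned above takes over.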

Relevance:

30.00%

Publisher:

Abstract:

Many power utilities around the world have experienced spurious tripping of directional earth fault relays in their mesh distribution networks due to induced circulating currents. This circulating current is a zero-sequence current induced in the healthy circuit by the zero-sequence current flowing during a ground fault on a parallel circuit. This paper quantitatively discusses the effects of mutual coupling on the earth fault protection of distribution systems. An actual spurious tripping event is analyzed to support the theory and to present options for improved resilience to spurious tripping.
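As a quick numerical reminder (illustrative phasor values, not data from the analyzed event), the zero-sequence component is the average of the three phase currents: it vanishes for a balanced set and becomes non-zero when a ground fault unbalances one phase — and it is this residual that mutual coupling drives into the parallel healthy circuit.

```python
# Zero-sequence component of a three-phase phasor set: I0 = (Ia + Ib + Ic) / 3.
# Phasor magnitudes are illustrative only.
import cmath

def zero_sequence(ia, ib, ic):
    return (ia + ib + ic) / 3

a = cmath.exp(2j * cmath.pi / 3)          # 120-degree rotation operator
balanced = zero_sequence(100, 100 * a**2, 100 * a)   # healthy: ~0
faulted = zero_sequence(300, 100 * a**2, 100 * a)    # one phase boosted by a fault
```

Directional earth fault relays operate on exactly this residual quantity, which is why an induced zero-sequence current in a healthy feeder can trip them spuriously.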

Relevance:

30.00%

Publisher:

Abstract:

A review of the literature related to issues involved in irrigation induced agricultural development (IIAD) reveals that: (1) the magnitude, sensitivity and distribution of the social welfare effects of IIAD are not fully analysed; (2) the impacts of excessive pesticide use on farmers’ health are not adequately explained; (3) no analysis estimates the relationship between farm-level efficiency and overuse of agro-chemical inputs under imperfect markets; and (4) the method of incorporating groundwater extraction costs is misleading. This PhD thesis investigates these issues using primary data, along with secondary data from Sri Lanka. The overall findings of the thesis can be summarised as follows. First, the thesis demonstrates that Sri Lanka has gained a positive welfare change as a result of introducing new irrigation technology. The change in consumer surplus is Rs. 48,236 million, while the change in producer surplus is Rs. 14,274 million between 1970 and 2006. The results also show that the long-run benefits and costs of IIAD depend critically on the magnitude of the expansion of the irrigated area, as well as the competition faced by traditional farmers (agricultural crowding-out effects). The traditional sector’s ability to compete with the modern sector depends on productivity improvements, reduced production costs and future structural changes (spillover effects). Second, the thesis findings on pesticides used for agriculture show that, on average, a farmer incurs a cost of approximately Rs. 590 to 800 per month during a typical cultivation period due to exposure to pesticides. The value of the average loss in earnings per farmer during a typical cultivation season is Rs. 475 per month for the ‘hospitalised’ sample, and approximately Rs. 345 per month for the ‘general’ farmers group. However, the average willingness to pay (WTP) to avoid exposure to pesticides is approximately Rs. 950 and Rs. 620 for the ‘hospitalised’ and ‘general’ farmers’ samples respectively. The estimated percentage contributions to WTP from health costs, lost earnings, mitigating expenditure, and disutility are 29, 50, 5 and 16 per cent respectively for ‘hospitalised’ farmers, and 32, 55, 8 and 5 per cent respectively for ‘general’ farmers. It is also shown that, given market imperfections for most agricultural inputs, farmers overuse pesticides in the expectation of higher future returns. This has led to an increase in inefficiency in farming practices that is not understood by the farmers. Third, it is found that various groundwater depletion studies in the economics literature have provided misleading optimal water extraction levels, due to a failure to incorporate all production costs in the relevant models. Only by adding quality changes to quantity deterioration is it possible to derive socially optimal levels. Empirical results clearly show that the benefits per hectare per month, considering both the avoided costs of deepening agro-wells by five feet from the existing average and the avoided costs of maintaining the water salinity level at 1.8 mmhos/cm, are approximately Rs. 4,350 for farmers in the Anuradhapura district and Rs. 5,600 for farmers in the Matale district.