140 results for Probabilistic Algorithms
Abstract:
Two new maximum power point tracking (MPPT) algorithms are presented: the input-voltage-sensor and duty-ratio MPPT algorithm (ViSD algorithm), and the output-voltage-sensor and duty-ratio MPPT algorithm (VoSD algorithm). The ViSD and VoSD algorithms have the features, characteristics and advantages of the incremental conductance (INC) algorithm; however, unlike the INC algorithm, which requires two sensors (a voltage sensor and a current sensor), the two algorithms are more desirable because they require only one sensor: a voltage sensor. Moreover, the VoSD technique is less complex and hence requires less computational processing. Both the ViSD and VoSD techniques operate by maximising power at the converter output instead of the input. The ViSD algorithm uses a voltage sensor placed at the input of a boost converter, while the VoSD algorithm uses a voltage sensor placed at the output of a boost converter. © 2011 IEEE.
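The abstract does not give the ViSD/VoSD update rules themselves, so the sketch below only illustrates the general single-voltage-sensor idea they build on, assuming a boost converter feeding a resistive load (where output power rises with output voltage) and a simple hill-climbing duty-ratio update; the function and variable names are hypothetical.

```python
# Illustrative sketch only: a generic hill-climbing duty-ratio MPPT driven by a
# single output-voltage measurement. This is NOT the authors' ViSD/VoSD update
# rule, which the abstract does not specify; for a boost converter feeding a
# resistive load, output power scales with Vo^2, so climbing Vo climbs power.

def mppt_step(v_out, state, delta_d=0.005, d_min=0.05, d_max=0.95):
    """One MPPT iteration. `state` carries (duty, previous_v_out, direction)."""
    duty, last_v, direction = state
    if v_out < last_v:          # last perturbation reduced the output voltage
        direction = -direction  # so reverse the search direction
    duty = min(d_max, max(d_min, duty + direction * delta_d))
    return duty, (duty, v_out, direction)

# Usage sketch (read_output_voltage() is a hypothetical measurement source):
# state = (0.5, 0.0, +1)
# duty, state = mppt_step(read_output_voltage(), state)
```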
Abstract:
Bayesian-formulated neural networks are implemented using the hybrid Monte Carlo method for probabilistic fault identification in cylindrical shells. Each of the 20 nominally identical cylindrical shells is divided into three substructures. Holes of (12±2) mm in diameter are introduced in each of the substructures and vibration data are measured. Modal properties and the Coordinate Modal Assurance Criterion (COMAC) are used to train the two modal-property neural networks; the COMAC values are calculated by treating the natural-frequency vector as an additional mode. Modal energies are calculated by integrating the real and imaginary components of the frequency response functions over bandwidths of 12% of the natural frequencies. The modal energies and the Coordinate Modal Energy Assurance Criterion (COMEAC) are used to train the two frequency-response-function neural networks. The averages of the two sets of trained networks (COMAC and COMEAC, as well as modal properties and modal energies) form two committees of networks. The COMEAC and COMAC are found to be better identification data than the modal properties and modal energies used directly. The committee approach is observed to give lower standard deviations than the individual methods. The main advantage of the Bayesian formulation is that it gives the damage identities together with their respective confidence intervals.
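As a rough companion to the quantities named above, the sketch below computes a standard COMAC between two mode-shape sets and a "modal energy" taken as the integral of an FRF component over a band of 12% of the natural frequency; these are textbook definitions assumed here, and the paper may use variants.

```python
# Minimal sketch, assuming the standard COMAC definition and a 12%-of-f_n
# integration band for the modal energies (per the abstract); not the authors'
# exact processing pipeline.
import numpy as np

def comac(phi_a, phi_b):
    """phi_a, phi_b: (n_coords, n_modes) mode-shape matrices. Returns a
    (n_coords,) vector of COMAC values, one per measurement coordinate."""
    num = np.abs(np.sum(phi_a * phi_b, axis=1)) ** 2
    den = np.sum(phi_a ** 2, axis=1) * np.sum(phi_b ** 2, axis=1)
    return num / den

def modal_energy(freqs, frf, f_n, bandwidth=0.12):
    """Integrate the real and imaginary parts of a frequency response function
    over a band of `bandwidth` * f_n centred on the natural frequency f_n."""
    lo, hi = f_n * (1 - bandwidth / 2), f_n * (1 + bandwidth / 2)
    mask = (freqs >= lo) & (freqs <= hi)
    e_real = np.trapz(frf[mask].real, freqs[mask])
    e_imag = np.trapz(frf[mask].imag, freqs[mask])
    return e_real, e_imag
```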
Abstract:
This paper presents some developments in query expansion and document representation for our spoken document retrieval system and shows how various retrieval techniques affect performance for different sets of transcriptions derived from a common speech source. Modifications of the document representation are used that combine several query-expansion techniques, knowledge-based on the one hand and statistics-based on the other. Taken together, these techniques can improve Average Precision by over 19% relative to a system similar to the one we presented at TREC-7. These new experiments have also confirmed that the degradation of Average Precision due to a word error rate (WER) of 25% is quite small (3.7% relative) and can be reduced to almost zero (0.2% relative). The overall improvement of the retrieval system is also observed for seven different sets of transcriptions from different recognition engines with WERs ranging from 24.8% to 61.5%. We hope to repeat these experiments when larger document collections become available, in order to evaluate the scalability of these techniques.
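For reference, the evaluation measure quoted above is standard non-interpolated Average Precision; a minimal sketch for a single query follows (the TREC-style definition is assumed).

```python
# Sketch of non-interpolated Average Precision for one query, given a ranked
# list of document ids and the set of relevant ids (standard TREC definition).
def average_precision(ranked_ids, relevant_ids):
    relevant_ids = set(relevant_ids)
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)   # precision at each relevant hit
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

# e.g. average_precision(["d3", "d7", "d1"], {"d3", "d1"}) == (1/1 + 2/3) / 2
```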
Abstract:
Algorithms are presented for detection and tracking of multiple clusters of co-ordinated targets. Based on a Markov chain Monte Carlo sampling mechanization, the new algorithms maintain a discrete approximation of the filtering density of the clusters' state. The filters' tracking efficiency is enhanced by incorporating various sampling improvement strategies into the basic Metropolis-Hastings scheme. Thus, an evolutionary stage consisting of two primary steps is introduced: 1) producing a population of different chain realizations, and 2) exchanging genetic material between samples in this population. The performance of the resulting evolutionary filtering algorithms is demonstrated in two different settings. In the first, both group and target properties are estimated whereas in the second, which consists of a very large number of targets, only the clustering structure is maintained. © 2009 IFAC.
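The evolutionary stage described above can be pictured as a population of Metropolis-Hastings chains with an occasional exchange of state components between chains; the toy sketch below shows only that generic structure on a placeholder target, not the clustering filter itself.

```python
# Toy sketch of the two evolutionary ingredients named above, layered on a
# plain Metropolis-Hastings kernel: (1) a small population of chains, and
# (2) occasional exchange of "genetic material" between two chain states,
# accepted with a Metropolis test. The target density is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    return -0.5 * np.sum(x ** 2)             # placeholder: standard Gaussian

def mh_step(x, step=0.5):
    prop = x + step * rng.standard_normal(x.shape)
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        return prop
    return x

def crossover(x_a, x_b):
    """Propose swapping a random block of coordinates between two chains and
    accept with a Metropolis test, so the joint target is preserved."""
    mask = rng.random(x_a.shape) < 0.5
    y_a, y_b = np.where(mask, x_b, x_a), np.where(mask, x_a, x_b)
    log_ratio = (log_target(y_a) + log_target(y_b)
                 - log_target(x_a) - log_target(x_b))
    if np.log(rng.random()) < log_ratio:
        return y_a, y_b
    return x_a, x_b

population = [rng.standard_normal(4) for _ in range(6)]
for it in range(1000):
    population = [mh_step(x) for x in population]           # mutation stage
    if it % 10 == 0:                                         # exchange stage
        i, j = rng.choice(len(population), size=2, replace=False)
        population[i], population[j] = crossover(population[i], population[j])
```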
Abstract:
Demodulation is an ill-posed problem whenever both carrier and envelope signals are broadband and unknown. Here, we approach this problem using the methods of probabilistic inference. The new approach, called Probabilistic Amplitude Demodulation (PAD), is computationally challenging but improves on existing methods in a number of ways. In contrast to previous approaches to demodulation, it satisfies five key desiderata: PAD has soft constraints because it is probabilistic; PAD automatically adjusts to the signal because it learns parameters; PAD is user-steerable because the solution can be shaped by user-specific prior information; PAD is robust to broadband noise because this is modeled explicitly; and PAD's solution is self-consistent, empirically satisfying a Carrier Identity property. Furthermore, the probabilistic view naturally encompasses noise and uncertainty, allowing PAD to cope with missing data and return error bars on carrier and envelope estimates. Finally, we show that when PAD is applied to a bandpass-filtered signal, the stop-band energy of the inferred carrier is minimal, making PAD well-suited to sub-band demodulation. © 2006 IEEE.
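PAD itself is a probabilistic inference scheme and is not reproduced here; as a point of reference, the sketch below shows the classical Hilbert-envelope demodulation it is contrasted with, together with a rough check of the Carrier Identity idea (demodulating the extracted carrier should return an approximately flat envelope).

```python
# Classical Hilbert-envelope baseline (not PAD): envelope = |analytic signal|,
# carrier = signal / envelope. Demodulating the carrier again should give an
# envelope close to 1, the "Carrier Identity" notion mentioned in the abstract.
import numpy as np
from scipy.signal import hilbert

def hilbert_demodulate(x):
    envelope = np.abs(hilbert(x))
    carrier = x / np.maximum(envelope, 1e-12)   # guard against division by zero
    return envelope, carrier

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = (1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 440 * t)

env, car = hilbert_demodulate(signal)
env2, _ = hilbert_demodulate(car)
print("mean envelope of carrier:", env2[100:-100].mean())   # close to 1 away from edges
```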
Abstract:
This paper explores the current state of the art in performance indicators and the use of probabilistic approaches in climate change impact studies. It presents a critical review of recent publications in this field, focussing on (1) metrics for energy use for heating and cooling, emissions, overheating and high-level performance aspects, and (2) the uptake of uncertainty and risk analysis. This is followed by a case study, which is used to explore some of the contextual issues around the broader uptake of climate change impact studies in practice. The work concludes that probabilistic predictions of the impact of climate change are feasible, but only under strict and explicitly stated assumptions. © 2011 Elsevier B.V. All rights reserved.
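To make the first metric family concrete, the sketch below computes heating/cooling degree-days from daily mean outdoor temperatures and a simple overheating indicator (hours above a comfort threshold); the base and threshold temperatures are illustrative assumptions, not values from the paper.

```python
# Illustrative indicator calculations; base temperatures (15.5/22.0 °C) and the
# 28 °C overheating threshold are common UK conventions assumed here, not
# values taken from the reviewed publications.
import numpy as np

def degree_days(daily_mean_temps, base_heating=15.5, base_cooling=22.0):
    """Heating and cooling degree-days from daily mean outdoor temperatures (°C)."""
    temps = np.asarray(daily_mean_temps, dtype=float)
    hdd = np.clip(base_heating - temps, 0.0, None).sum()
    cdd = np.clip(temps - base_cooling, 0.0, None).sum()
    return hdd, cdd

def overheating_hours(hourly_indoor_temps, threshold=28.0):
    """Count of hourly temperatures above a fixed comfort threshold."""
    return int((np.asarray(hourly_indoor_temps, dtype=float) > threshold).sum())
```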
Abstract:
This article investigates how to use the UK probabilistic climate-change projections (UKCP09) in rigorous building energy analysis. Two office buildings (deep plan and shallow plan) are used as case studies to demonstrate the application of UKCP09. Three different methods for reducing the computational demands are explored: statistical reduction (Finkelstein-Schafer [F-S] statistics), simplification using degree-day theory, and the use of metamodels. The first method, which is based on an established technique, can be used as a reference because it provides the most accurate information. However, weather-file selection based on the F-S statistic must be automated in a programming language because thousands of weather files created by the UKCP09 weather generator need to be processed. A combination of the second (degree-day theory) and third (metamodel) methods requires only a relatively small number of simulation runs, yet still provides valuable information for further uncertainty and sensitivity analyses. The article also demonstrates how grid computing can be used to speed up the calculation of many independent EnergyPlus models by harnessing the processing power of idle desktop computers. © 2011 International Building Performance Simulation Association (IBPSA).
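The statistical-reduction step can be illustrated with a small sketch of the Finkelstein-Schafer selection idea: compare the empirical CDF of a daily weather variable for each candidate file against the long-term CDF and keep the candidate with the smallest mean absolute difference. File handling and the choice of variable are simplified assumptions here, not the article's implementation.

```python
# Sketch of Finkelstein-Schafer (F-S) weather-file selection: pick the candidate
# whose daily values have the empirical CDF closest (in mean absolute difference)
# to the long-term CDF. Candidate/long-term data handling is simplified.
import numpy as np

def fs_statistic(candidate_values, long_term_values):
    """Mean absolute difference between the candidate and long-term empirical
    CDFs, evaluated at the candidate's daily values."""
    cand = np.sort(np.asarray(candidate_values, dtype=float))
    long_term = np.sort(np.asarray(long_term_values, dtype=float))
    cdf_cand = np.searchsorted(cand, cand, side="right") / cand.size
    cdf_long = np.searchsorted(long_term, cand, side="right") / long_term.size
    return float(np.mean(np.abs(cdf_cand - cdf_long)))

def select_representative(candidates, long_term_values):
    """`candidates`: dict mapping file name -> array of daily values.
    Returns the name with the lowest F-S statistic."""
    return min(candidates, key=lambda name: fs_statistic(candidates[name], long_term_values))
```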