983 results for Probabilistic analysis
Abstract:
Deregulation and market practices in the power industry have brought great challenges to the system planning area. In particular, they introduce a variety of uncertainties to system planning, and new techniques are required to cope with them. As a promising approach, probabilistic methods are attracting more and more attention from system planners. In small signal stability analysis, generation control parameters play an important role in determining the stability margin. The objective of this paper is to investigate the sensitivity characteristics of the power system state matrix with respect to system parameter uncertainties, using analytical and numerical approaches, and to identify the parameters that have the greatest impact on the system eigenvalues and, therefore, on the system stability properties. Variations in the identified parameters should be investigated with priority. The results can be used to help Regional Transmission Organizations (RTOs) and Independent System Operators (ISOs) perform planning studies under the open access environment.
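As a rough illustration of this kind of screening, the sketch below propagates uncertainty in a single control gain through a toy 2x2 state matrix and checks the eigenvalue real parts. The matrix, the gain distribution, and the sample size are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: Monte Carlo screening of how an uncertain control gain K
# shifts the eigenvalues of a small linearized state matrix A(K).
import numpy as np

rng = np.random.default_rng(0)

def state_matrix(K):
    # Hypothetical 2x2 linearized model whose damping depends on the gain K.
    return np.array([[0.0, 1.0],
                     [-5.0, -0.1 - 0.2 * K]])

K_nom, K_sigma = 2.0, 0.5          # assumed nominal gain and its uncertainty
samples = rng.normal(K_nom, K_sigma, size=10_000)
eigs = np.array([np.linalg.eigvals(state_matrix(K)) for K in samples])

p_stable = np.mean(np.all(eigs.real < 0.0, axis=1))
slope = np.polyfit(samples, eigs.real.max(axis=1), 1)[0]   # crude sensitivity of max Re(lambda) to K
print(f"P(stable) ~ {p_stable:.3f}, d(max Re lambda)/dK ~ {slope:.3f}")
```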
Abstract:
Web transaction data between Web visitors and Web functionalities usually convey task-oriented user behavior patterns. Mining such click-stream data captures usage pattern information. Web usage mining has become one of the most widely used techniques for Web recommendation, which tailors Web content to user-preferred styles. Traditional Web usage mining techniques, such as Web user session or Web page clustering, association rule mining and frequent navigational path mining, can only discover usage patterns explicitly. They cannot, however, reveal the underlying navigational activities or identify the latent relationships associated with the patterns among Web users and Web pages. In this work, we propose a Web recommendation framework that incorporates Web usage mining based on the Probabilistic Latent Semantic Analysis (PLSA) model. The main advantage of this method is that it not only discovers usage-based access patterns but also reveals the underlying latent factors. With the discovered user access patterns, we then present users with content of greater interest via collaborative recommendation. To validate the effectiveness of the proposed approach, we conduct experiments on real-world datasets and compare the approach with some existing traditional techniques. The preliminary experimental results demonstrate the usability of the proposed approach.
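For concreteness, the sketch below runs the standard PLSA EM updates on a toy session-by-page count matrix; the counts, the number of latent factors, and the iteration count are illustrative assumptions rather than the authors' setup.

```python
# Minimal PLSA (EM) sketch: latent factors z stand in for usage patterns.
import numpy as np

rng = np.random.default_rng(1)
N = np.array([[4, 3, 0, 0],      # hypothetical click counts: rows = sessions,
              [5, 2, 1, 0],      # columns = pages
              [0, 1, 6, 4],
              [0, 0, 3, 5]], dtype=float)
S, P, Z = N.shape[0], N.shape[1], 2

p_z = np.full(Z, 1.0 / Z)                      # P(z)
p_s_z = rng.dirichlet(np.ones(S), size=Z)      # P(s|z), shape (Z, S)
p_p_z = rng.dirichlet(np.ones(P), size=Z)      # P(p|z), shape (Z, P)

for _ in range(100):
    # E-step: responsibilities P(z|s,p), shape (Z, S, P)
    joint = p_z[:, None, None] * p_s_z[:, :, None] * p_p_z[:, None, :]
    resp = joint / joint.sum(axis=0, keepdims=True)
    # M-step: re-estimate the three distributions from expected counts
    nz = N[None, :, :] * resp
    p_s_z = nz.sum(axis=2) / nz.sum(axis=(1, 2))[:, None]
    p_p_z = nz.sum(axis=1) / nz.sum(axis=(1, 2))[:, None]
    p_z = nz.sum(axis=(1, 2)) / N.sum()

# Dominant latent usage pattern per session, from P(z|s) ~ P(z) P(s|z)
post_z_s = p_z[:, None] * p_s_z
post_z_s /= post_z_s.sum(axis=0)
print("dominant usage pattern per session:", np.argmax(post_z_s, axis=0))
```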
Abstract:
A new methodology is proposed for the analysis of generation capacity investment in a deregulated market environment. The methodology performs the investment appraisal within a probabilistic framework. The probabilistic production simulation (PPC) algorithm is used to compute the expected energy generated, taking into account system load variations and plant forced outage rates, while a Monte Carlo approach is applied to model the electricity price variability seen in a realistic network. The model is able to capture the price, and hence profitability, uncertainties faced by generator companies. Seasonal variations in electricity prices and system demand are modeled independently. The method is validated on the IEEE RTS system, augmented with realistic market and plant data, by using it to compare the financial viability of several generator investments applying either conventional or directly connected generator (powerformer) technologies. The significance of the results is assessed using several financial risk measures.
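The sketch below conveys the flavour of such a probabilistic appraisal: Monte Carlo sampling of electricity prices and forced outages, followed by simple financial risk measures on the resulting annual profit. All numbers (capacity, costs, price model) are assumptions for illustration, not the paper's IEEE RTS data.

```python
# Minimal Monte Carlo appraisal sketch for a single generation investment.
import numpy as np

rng = np.random.default_rng(2)
hours = 8760
capacity_mw = 400.0
marginal_cost = 25.0          # $/MWh, assumed
forced_outage_rate = 0.08     # plant unavailability, assumed
annual_fixed_cost = 30e6      # $, assumed

profits = []
for _ in range(5_000):
    price = rng.lognormal(mean=np.log(40.0), sigma=0.35, size=hours)   # $/MWh
    available = rng.random(hours) > forced_outage_rate
    dispatched = available & (price > marginal_cost)    # run only when profitable
    energy = capacity_mw * dispatched                   # MWh generated per hour
    profits.append(np.sum(energy * (price - marginal_cost)) - annual_fixed_cost)

profits = np.array(profits)
var_5 = np.percentile(profits, 5)                       # 5th-percentile profit as a risk measure
print(f"E[profit] = {profits.mean()/1e6:.1f} M$, 5th percentile = {var_5/1e6:.1f} M$")
```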
Abstract:
This paper presents a method to analyze the first-order eigenvalue sensitivity with respect to the operating parameters of a power system. The method is based on explicitly expressing the system state matrix in terms of sub-matrices, and the eigenvalue sensitivity is calculated from the explicitly formed state matrix. A 4th-order generator model and a 4th-order exciter model are used to form the system state matrix. A case study using the New England 10-machine, 39-bus system demonstrates the effectiveness of the proposed method. The method can be applied to eigenvalue sensitivity analysis of large-scale power systems with respect to operating parameters.
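The standard first-order sensitivity formula behind such analyses uses the left and right eigenvectors: d(lambda_i)/dp = (psi_i^H (dA/dp) phi_i) / (psi_i^H phi_i). The sketch below evaluates it numerically on a toy 3x3 matrix A(p); the matrix is a stand-in assumption, not the paper's explicitly formed power-system state matrix.

```python
# Minimal numerical sketch of first-order eigenvalue sensitivity.
import numpy as np
from scipy.linalg import eig

def A(p):
    # Toy parameter-dependent state matrix (assumption for illustration).
    return np.array([[-1.0,  2.0,  0.0],
                     [ 0.5, -3.0,  p  ],
                     [ 0.0,  1.0, -0.5 - 0.1 * p]])

p0, dp = 1.0, 1e-6
lam, vl, vr = eig(A(p0), left=True, right=True)
dA = (A(p0 + dp) - A(p0 - dp)) / (2 * dp)      # dA/dp by central difference

for i in range(len(lam)):
    psi, phi = vl[:, i], vr[:, i]              # left / right eigenvectors
    s = (psi.conj() @ dA @ phi) / (psi.conj() @ phi)
    print("lambda =", np.round(lam[i], 4), " d(lambda)/dp =", np.round(s, 4))
```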
Abstract:
Grid computing is an advanced technique for collaboratively solving complicated scientific problems using geographically and organisationally dispersed computational, data storage and other resources. Applying grid computing could provide significant benefits to all aspects of power system work that involve computing. Building on our previous research, this paper presents a novel grid computing approach for probabilistic small signal stability (PSSS) analysis in electric power systems with uncertainties. A prototype computing grid was successfully implemented in our research lab to carry out PSSS analysis on two benchmark systems. Compared to traditional computing techniques, grid computing has given better performance for PSSS analysis in terms of computing capacity, speed, accuracy and stability. In addition, a computing grid framework for power system analysis is proposed based on this study.
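The essential idea of distributing PSSS samples can be caricatured with a local process pool standing in for the grid nodes; the 2x2 model and the gain distribution below are illustrative assumptions, not the benchmark systems used in the paper.

```python
# Minimal sketch: parallel Monte Carlo PSSS samples across worker processes.
import numpy as np
from multiprocessing import Pool

def damping_ok(seed):
    rng = np.random.default_rng(seed)
    k = rng.normal(2.0, 0.5)                        # uncertain control gain (assumed)
    A = np.array([[0.0, 1.0], [-5.0, -0.1 - 0.2 * k]])
    return bool(np.all(np.linalg.eigvals(A).real < 0.0))

if __name__ == "__main__":
    with Pool(4) as pool:                           # 4 workers stand in for grid nodes
        results = pool.map(damping_ok, range(20_000))
    print("P(small-signal stable) ~", np.mean(results))
```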
Abstract:
Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA.
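A compact way to see the EM algorithm for probabilistic PCA (x = W z + mu + isotropic noise) is the numpy sketch below; the toy data, latent dimensionality and iteration count are illustrative choices, not the paper's experiments.

```python
# Minimal EM sketch for probabilistic PCA on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
N, d, q = 500, 5, 2
W_true = rng.normal(size=(d, q))
X = rng.normal(size=(N, q)) @ W_true.T + 0.1 * rng.normal(size=(N, d))

mu = X.mean(axis=0)
Xc = X - mu
W = rng.normal(size=(d, q))
sigma2 = 1.0

for _ in range(200):
    # E-step: posterior moments of the latent variables
    M = W.T @ W + sigma2 * np.eye(q)
    Minv = np.linalg.inv(M)
    Ez = Xc @ W @ Minv                     # (N, q) posterior means
    Ezz = N * sigma2 * Minv + Ez.T @ Ez    # sum over n of <z_n z_n^T>
    # M-step: update the loading matrix and the noise variance
    W = Xc.T @ Ez @ np.linalg.inv(Ezz)
    sigma2 = (np.sum(Xc**2) - 2 * np.sum((Ez @ W.T) * Xc)
              + np.trace(Ezz @ W.T @ W)) / (N * d)

print("estimated noise variance:", round(sigma2, 4))
```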
Abstract:
Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified, expressed in a probabilistic variant of temporal logic, and calculated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments.
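To illustrate the kind of query answered by such quantitative verification, the sketch below computes the expected cumulative cost to termination in a tiny Markov model of usage states by solving a linear system. This is only an analogy under assumed numbers; it is not the PPM library, an MDP with nondeterminism, or a probabilistic temporal logic checker.

```python
# Minimal sketch: expected total cost until absorption in a usage-state chain.
import numpy as np

states = ["idle", "light", "heavy", "terminate"]
# Transition probabilities between usage states (assumed numbers).
P = np.array([[0.6, 0.3, 0.0, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.0, 0.4, 0.5, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
cost = np.array([0.05, 0.40, 1.20, 0.0])   # $ per step in each state, assumed

# Expected total cost v satisfies v = cost + P v on the transient states,
# i.e. (I - Q) v = r with Q the transient-to-transient block.
Q, r = P[:3, :3], cost[:3]
v = np.linalg.solve(np.eye(3) - Q, r)
for s, ev in zip(states[:3], v):
    print(f"expected total cost starting from {s}: ${ev:.2f}")
```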
Abstract:
Alpha oscillatory activity has long been associated with perceptual and cognitive processes related to attention control. The aim of this study is to explore the task-dependent role of alpha frequency in a lateralized visuo-spatial detection task. Specifically, the thesis consolidates what the scientific literature knows about the role of alpha frequency in perceptual accuracy, and deepens the understanding of what determines trial-by-trial fluctuations of alpha parameters and how these fluctuations influence overall task performance. The hypotheses, confirmed empirically, were that different implicit strategies are adopted depending on the task context in order to maximize performance through an optimal distribution of resources (namely alpha frequency, which is positively associated with performance): "lateralization" of attentional resources towards one hemifield should be associated with a larger alpha frequency difference between the contralateral and ipsilateral hemispheres, whereas "distribution" of attentional resources across hemifields should be associated with a smaller alpha frequency difference between hemispheres. These strategies, used by participants according to their brain capabilities, proved adaptive or maladaptive depending on the task to which they were applied: "distribution" of attentional resources appeared to be the best strategy when the probability distribution between hemifields was balanced (the neutral condition task), while "lateralization" appeared more effective when the probability distribution was biased towards one hemifield (the biased condition task).
Abstract:
Stavskaya's model is a one-dimensional probabilistic cellular automaton (PCA) introduced at the end of the 1960s as an example of a model displaying a nonequilibrium phase transition. Although its absorbing-state phase transition is well understood nowadays, the model had never received a full numerical treatment to investigate its critical behavior. In this Brief Report we characterize the critical behavior of Stavskaya's PCA by means of Monte Carlo simulations and finite-size scaling analysis. The critical exponents of the model are calculated and indicate that its phase transition belongs to the directed percolation universality class, as would be expected on the basis of the directed percolation conjecture. We also explicitly establish the relationship of the model with the Domany-Kinzel PCA on its directed site percolation line, a connection that seems to have gone unnoticed in the literature so far.
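A minimal Monte Carlo sketch of the model is given below, under one common convention (an assumption to check against the paper's definition): site i becomes occupied with probability 1 - alpha if site i or site i+1 was occupied, and empty otherwise, with the all-empty configuration absorbing. The stationary density as a function of alpha is the kind of order parameter used in finite-size scaling.

```python
# Minimal Monte Carlo sketch of Stavskaya's PCA (periodic boundary).
import numpy as np

rng = np.random.default_rng(4)
L, T, alpha = 10_000, 2_000, 0.25          # lattice size, steps, noise level (assumed)
sigma = np.ones(L, dtype=np.int8)          # start fully occupied

for t in range(T):
    parent = sigma | np.roll(sigma, -1)                    # 1 if site i or i+1 occupied
    sigma = parent & (rng.random(L) > alpha).astype(np.int8)

print(f"alpha = {alpha}: surviving density ~ {sigma.mean():.4f}")
```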
Abstract:
Thanks to recent advances in molecular biology, allied to an ever increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles; however, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which are used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are examined in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data, and the results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical (GG) networks. The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increased. The signal size was important for the inference method to achieve better accuracy in network identification, and very good results were obtained even with small expression profiles. However, the adopted inference method was not able to distinguish between different structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks, identifying several properties of the evaluated method, and it can be extended to other inference methods.
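The generate / simulate / infer / compare loop described above can be sketched as follows. The network model, the Boolean update rule, and the correlation-based predictor scoring are simplifying assumptions for illustration, not the paper's exact models or feature selection criterion.

```python
# Rough sketch of AGN-based validation: generate, simulate, infer, compare.
import numpy as np

rng = np.random.default_rng(5)
n_genes, avg_k, T = 20, 2, 200
adj = rng.random((n_genes, n_genes)) < avg_k / n_genes     # adj[i, j]: gene i regulates gene j
np.fill_diagonal(adj, False)

# Boolean dynamics: gene j turns ON if a majority of its regulators were ON (5% noise).
X = np.zeros((T, n_genes), dtype=int)
X[0] = rng.integers(0, 2, n_genes)
for t in range(T - 1):
    for j in range(n_genes):
        regs = np.where(adj[:, j])[0]
        if len(regs) == 0:
            X[t + 1, j] = rng.integers(0, 2)
        else:
            vote = X[t, regs].mean() >= 0.5
            X[t + 1, j] = vote if rng.random() > 0.05 else 1 - vote

# Inference: score each candidate regulator by |correlation| with the target's next state.
recovered = np.zeros_like(adj)
for j in range(n_genes):
    scores = np.nan_to_num([abs(np.corrcoef(X[:-1, i], X[1:, j])[0, 1])
                            for i in range(n_genes)])
    scores[j] = 0.0                                        # exclude self-loops
    k = adj[:, j].sum()                                    # true in-degree, used only to fix
    top = np.argsort(scores)[-k:] if k else []             # how many predictors to keep
    recovered[list(top), j] = True

print("correctly recovered edges:", np.sum(recovered & adj), "of", adj.sum())
```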
Abstract:
Alternative splicing of gene transcripts greatly expands the functional capacity of the genome, and certain splice isoforms may indicate specific disease states such as cancer. Splice junction microarrays interrogate thousands of splice junctions, but data analysis is difficult and error prone because of the increased complexity compared to differential gene expression analysis. We present Rank Change Detection (RCD) as a method to identify differential splicing events based upon a straightforward probabilistic model comparing the over- or underrepresentation of two or more competing isoforms. RCD has advantages over commonly used methods because it is robust to false positive errors arising from nonlinear trends in microarray measurements. Further, RCD does not depend on prior knowledge of splice isoforms, yet it takes advantage of the inherent structure of mutually exclusive junctions, and it is conceptually generalizable to other types of splicing arrays or to RNA-Seq. RCD specifically identifies the biologically important cases in which a splice junction becomes more or less prevalent relative to other mutually exclusive junctions. The example data are from different cell lines of glioblastoma tumors assayed with Agilent microarrays.
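A simplified illustration of the rank-change idea (not the published RCD model or its probabilistic scoring): within each condition, rank the mutually exclusive junctions of a gene by intensity and flag junctions whose rank changes between conditions. The intensities below are hypothetical.

```python
# Toy rank-change check on mutually exclusive junctions of one gene.
import numpy as np

# Hypothetical log2 intensities: 3 mutually exclusive junctions, 4 replicates per condition.
cond_a = np.array([[8.1, 8.3, 7.9, 8.2],    # junction 1
                   [6.0, 6.2, 5.9, 6.1],    # junction 2
                   [4.5, 4.4, 4.7, 4.6]])   # junction 3
cond_b = np.array([[5.8, 6.1, 5.9, 6.0],
                   [8.0, 8.2, 8.1, 7.9],
                   [4.6, 4.5, 4.4, 4.7]])

rank_a = np.argsort(np.argsort(-cond_a.mean(axis=1)))   # 0 = most abundant junction
rank_b = np.argsort(np.argsort(-cond_b.mean(axis=1)))
for j, (ra, rb) in enumerate(zip(rank_a, rank_b), start=1):
    flag = "rank change" if ra != rb else "stable"
    print(f"junction {j}: rank {ra} -> {rb}  ({flag})")
```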
Abstract:
Fatigue and crack propagation are phenomena affected by high uncertainties, for which deterministic methods fail to accurately predict the structural life. The present work couples reliability analysis with the boundary element method, which has been recognized as an accurate and efficient numerical technique for dealing with mixed-mode crack propagation and is therefore very attractive for reliability analysis. The coupled procedure allows uncertainties to be considered during the crack growth process and computes the probability of fatigue failure for complex structural geometries and loadings. Two coupling procedures are considered: direct coupling of the reliability and mechanical solvers, and indirect coupling via the response surface method. Numerical applications show the performance of the proposed models in lifetime assessment under uncertainties, with the direct method showing faster convergence than the response surface method.
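As a loose illustration of probabilistic fatigue assessment (not the paper's BEM-coupled solver or FORM procedure), the sketch below runs a Monte Carlo reliability estimate with a Paris-law crack growth model: random initial flaw size and Paris coefficient, with failure declared when the life to a critical crack length falls below the design life. All numbers are illustrative assumptions.

```python
# Minimal Monte Carlo fatigue reliability sketch with Paris-law crack growth.
import numpy as np

rng = np.random.default_rng(6)
n_sim = 50_000
design_life = 2.0e6            # cycles
d_sigma, Y = 100.0, 1.12       # stress range (MPa) and geometry factor, assumed
a_crit = 0.02                  # critical crack length (m), assumed

a0 = rng.lognormal(np.log(1e-3), 0.3, n_sim)     # random initial crack size (m)
C = rng.lognormal(np.log(3e-12), 0.2, n_sim)     # random Paris coefficient
m = 3.0                                          # Paris exponent, fixed

# Closed-form integration of da/dN = C (Y d_sigma sqrt(pi a))^m from a0 to a_crit (m != 2).
K_fac = (d_sigma * Y * np.sqrt(np.pi)) ** m
N_f = (a_crit ** (1 - m / 2) - a0 ** (1 - m / 2)) / (C * K_fac * (1 - m / 2))

p_fail = np.mean(N_f < design_life)
print(f"estimated probability of fatigue failure: {p_fail:.4f}")
```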