943 results for Markov model, enumeration method, maximum flow, minimum cut, contingency enumeration, frequency and duration, correlation, clustering, expected interruption cost


Relevance:

100.00%

Publisher:

Abstract:

Modern society has come to expect electrical energy on demand, while many facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can have serious consequences for the continuity of electricity supply. Because the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase spares and store them in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification according to selected failure criteria. The problem is complex because of the need to model the operation of substation and sub-transmission equipment in detail using network flow evaluation and to consider multiple levels of component failure. In this thesis a new model for aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the system reliability of bulk supply loads and of consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information that helps utilities better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual pieces of equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach is also developed that supports early planning decisions on replacement activities for non-repairable aging components, in order to maintain an economically acceptable level of system reliability. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
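
A minimal sketch in Python of the contingency enumeration idea described above. The component list, unavailability figures and the load-curtailment check are hypothetical placeholders, and aging is represented by a simple Weibull-style unavailability term that grows with age; this only illustrates how an aging-failure contribution could be combined with random-failure probabilities during enumeration, and is not the thesis's actual model.

from itertools import combinations
import math

# Hypothetical components: name -> (random-failure unavailability, age in years).
components = {
    "T1": (0.01, 35), "T2": (0.01, 10),
    "L1": (0.02, 25), "L2": (0.02, 5),
}

def aging_unavailability(age, scale=40.0, shape=3.0):
    # Illustrative Weibull-style term that grows with age (not the thesis's model).
    return 1.0 - math.exp(-((age / scale) ** shape))

def outage_probability(name):
    random_u, age = components[name]
    return min(random_u + aging_unavailability(age), 1.0)

def load_curtailed(out_of_service):
    # Placeholder failure criterion: load is curtailed if both transformers or
    # both lines are out (a real study would run a network-flow check here).
    return {"T1", "T2"} <= out_of_service or {"L1", "L2"} <= out_of_service

# Enumerate first- and second-order contingencies and accumulate a risk index.
curtailment_probability = 0.0
for order in (1, 2):
    for combo in combinations(components, order):
        out = set(combo)
        p_state = 1.0
        for c in components:
            p = outage_probability(c)
            p_state *= p if c in out else (1.0 - p)
        if load_curtailed(out):
            curtailment_probability += p_state

print(f"Probability of load curtailment (contingencies up to 2nd order): {curtailment_probability:.4e}")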

Relevance:

100.00%

Publisher:

Abstract:

INTRODUCTION: Hip fractures are responsible for excess mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs in hospitalization, rehabilitation, and institutionalization. The incidence rate increases sharply after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD in all women and one measuring BMD only in those having at least one risk factor, were compared with the reference strategy of "no screening". Cost-effectiveness ratios were measured as cost per year gained without hip fracture. Most probabilities were based on data observed in the EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost-effective than "screen women at risk". For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratios of these two strategies compared with the reference were 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor. Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be performed to improve the level of evidence on cost-effectiveness ratios of the usual screening strategies for osteoporosis.
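
A minimal sketch in Python of a decision-analytic Markov (cohort) model of the kind described above. The states, annual transition probabilities, costs and the effect attributed to screening are invented illustrative numbers rather than the EPIDOS/SEMOF/OFELY-based inputs of the study; the sketch only shows how such a model accumulates costs and fracture-free years to form an incremental cost-effectiveness ratio.

import numpy as np

# Illustrative 3-state cohort model: 0 = no hip fracture, 1 = post hip fracture, 2 = dead.
# Annual transition matrices (rows sum to 1); all numbers are hypothetical.
P_no_screening = np.array([[0.960, 0.020, 0.020],
                           [0.000, 0.930, 0.070],
                           [0.000, 0.000, 1.000]])
P_screening    = np.array([[0.970, 0.010, 0.020],   # assume treatment lowers fracture risk
                           [0.000, 0.930, 0.070],
                           [0.000, 0.000, 1.000]])

COST_SCREEN   = 2000.0    # screening + treatment cost per woman (hypothetical)
COST_FRACTURE = 10000.0   # cost attached to an incident hip fracture (hypothetical)

def run_cohort(P, upfront_cost, years=10):
    state = np.array([1.0, 0.0, 0.0])                # the whole cohort starts fracture-free
    cost, fracture_free_years = upfront_cost, 0.0
    for _ in range(years):
        cost += state[0] * P[0, 1] * COST_FRACTURE   # expected incident fractures this year
        state = state @ P
        fracture_free_years += state[0]              # person-years spent fracture-free
    return cost, fracture_free_years

cost_ref, eff_ref = run_cohort(P_no_screening, 0.0)
cost_scr, eff_scr = run_cohort(P_screening, COST_SCREEN)
icer = (cost_scr - cost_ref) / (eff_scr - eff_ref)
print(f"Incremental cost per additional year without hip fracture: {icer:.0f}")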

Relevance:

100.00%

Publisher:

Abstract:

A maximum likelihood estimator based on the coalescent for unequal migration rates and different subpopulation sizes is developed. The method uses a Markov chain Monte Carlo approach to investigate possible genealogies with branch lengths and with migration events. Properties of the new method are shown by using simulated data from a four-population n-island model and a source–sink population model. Our estimation method as coded in migrate is tested against genetree; both programs deliver a very similar likelihood surface. The algorithm converges to the estimates fairly quickly, even when the Markov chain is started from unfavorable parameters. The method was used to estimate gene flow in the Nile valley by using mtDNA data from three human populations.
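
A minimal sketch in Python of the Markov chain Monte Carlo machinery underlying such an estimator. A Gaussian pseudo-likelihood stands in for the coalescent likelihood over genealogies with migration events (which is far more involved), and the parameter, proposal width and data are hypothetical; the sketch only illustrates the Metropolis-Hastings updating used to explore a likelihood surface.

import random, math

random.seed(1)

# Hypothetical "data": summary statistics whose mean depends on a single
# migration-rate-like parameter theta (the real method evaluates a coalescent
# likelihood over sampled genealogies instead).
data = [1.8, 2.1, 2.4, 1.9, 2.2]

def log_likelihood(theta):
    if theta <= 0:
        return float("-inf")
    return sum(-0.5 * (x - theta) ** 2 for x in data)   # unit-variance Gaussian stand-in

def metropolis(n_iter=20000, step=0.3, theta=1.0):
    samples, ll = [], log_likelihood(theta)
    for _ in range(n_iter):
        proposal = theta + random.gauss(0.0, step)       # symmetric random-walk proposal
        ll_prop = log_likelihood(proposal)
        if math.log(random.random()) < ll_prop - ll:     # Metropolis acceptance rule
            theta, ll = proposal, ll_prop
        samples.append(theta)
    return samples

samples = metropolis()
burned = samples[5000:]                                  # discard burn-in
print("estimate of theta:", sum(burned) / len(burned))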

Relevance:

100.00%

Publisher:

Abstract:

Written Thai is one of the languages that do not mark word boundaries. In order to discover the meaning of a document, the text must first be separated into syllables, words, sentences, and paragraphs. This paper develops a novel method to segment Thai text by combining a non-dictionary-based technique with a dictionary-based technique. The method first applies Thai grammar rules to the text to identify syllables. A hidden Markov model is then used to merge possible syllables into words. The identified words are verified against a lexical dictionary, and a decision tree is employed to discover words not found in the dictionary. Documents used in the litigation process of Thai court proceedings were used in the experiments. The segmented words obtained by the proposed method outperform the results obtained by other existing methods.
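
A minimal sketch in Python of the HMM step described above: syllables are tagged with hidden states B (begins a word) or I (continues the current word), and the Viterbi algorithm picks the most likely tag sequence, which determines the word boundaries. The syllable sequence, transition and emission probabilities are invented for illustration; a real system would estimate them from a segmented corpus and follow this step with the dictionary and decision-tree verification stages.

import math

STATES = ("B", "I")                       # B = word-initial syllable, I = word-internal
trans = {("B", "B"): 0.5, ("B", "I"): 0.5, ("I", "B"): 0.6, ("I", "I"): 0.4}
start = {"B": 0.99, "I": 0.01}            # words start at the first syllable

def emit(state, syllable):
    # Toy emission model: short syllables slightly prefer word-internal positions.
    return 0.6 if (state == "I") == (len(syllable) <= 2) else 0.4

def viterbi(syllables):
    V = [{s: math.log(start[s]) + math.log(emit(s, syllables[0])) for s in STATES}]
    back = [{}]
    for t in range(1, len(syllables)):
        V.append({}); back.append({})
        for s in STATES:
            best_prev = max(STATES, key=lambda p: V[t-1][p] + math.log(trans[(p, s)]))
            V[t][s] = (V[t-1][best_prev] + math.log(trans[(best_prev, s)])
                       + math.log(emit(s, syllables[t])))
            back[t][s] = best_prev
    tags = [max(STATES, key=lambda s: V[-1][s])]
    for t in range(len(syllables) - 1, 0, -1):
        tags.append(back[t][tags[-1]])
    return list(reversed(tags))

syllables = ["sa", "wat", "dee", "khrap"]     # hypothetical romanised syllables
tags = viterbi(syllables)
words, current = [], ""
for syl, tag in zip(syllables, tags):
    if tag == "B" and current:
        words.append(current); current = ""
    current += syl
words.append(current)
print(tags, words)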

Relevance:

100.00%

Publisher:

Abstract:

The rapid increase in the deployment of CCTV systems has led to a greater demand for algorithms that are able to process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned `normal' model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people. Observations with insufficient likelihood under the model are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of crowd behaviour by assuming that the current state of the Hidden Markov Model depends not only on the previous state in the temporal direction, but also on the previous states of the adjacent spatial locations. Two different HMMs are trained to model the vertical and horizontal spatial causal information. Location features, flow features and optical flow textures are used as the features for the model. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
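
A minimal sketch in Python (with NumPy) of the likelihood-thresholding idea. A single ordinary one-dimensional discrete HMM stands in for the Semi-2D HMM of the paper, and the transition matrix, emission matrix, sequences and threshold are invented; the sketch only shows how sequences whose log-likelihood under the learned 'normal' model falls below a threshold are flagged as abnormal.

import numpy as np

# Hypothetical 'normal' model: 2 hidden states, 3 discrete observation symbols.
A  = np.array([[0.95, 0.05],
               [0.60, 0.40]])           # state transitions
B  = np.array([[0.60, 0.35, 0.05],
               [0.30, 0.50, 0.20]])     # emission probabilities per state
pi = np.array([0.9, 0.1])               # initial state distribution

def log_likelihood(obs):
    # Forward algorithm with per-step scaling; returns log p(obs | model).
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_p

normal_seq   = [0, 0, 1, 0, 0, 2, 0, 0]      # behaves like the model
abnormal_seq = [2, 2, 2, 1, 2, 2, 2, 2]      # unlikely under the model

THRESHOLD = -1.5   # per-frame log-likelihood threshold (hypothetical)
for name, seq in [("normal", normal_seq), ("abnormal", abnormal_seq)]:
    score = log_likelihood(seq) / len(seq)
    print(name, round(score, 2), "ABNORMAL" if score < THRESHOLD else "ok")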

Relevance:

100.00%

Publisher:

Abstract:

It has been shown that the conventional practice of designing a compensated hot-wire amplifier with a fixed ceiling-to-floor ratio results in a considerable and unnecessary increase in noise level at compensation settings other than the optimum (which occurs at the maximum compensation at the highest frequency of interest). The optimum ceiling-to-floor ratio has been estimated to be between 1.5 ω_max M and 2.0 ω_max M. Applying these considerations to an amplifier in which the ceiling-to-floor ratio is optimized at each compensation setting (for a given amplifier bandwidth) demonstrates the usefulness of the method in improving the signal-to-noise ratio.

Relevance:

100.00%

Publisher:

Abstract:

The generalizations of the Onsager model for the radial boundary layer and of the Carrier-Maslen model for the end-cap axial boundary layer in a high-speed rotating cylinder are formulated for studying the secondary gas flow due to wall heating and due to insertion of mass, momentum and energy into the cylinder. The generalizations have wider applicability than the original Onsager and Carrier-Maslen models, because they are not restricted to the limit A >> 1, though they are restricted to the limit Re >> 1 and to a cylinder whose length/diameter ratio (aspect ratio) is large. Here the stratification parameter A = √(m Ω² R² / (2 k_B T)) is the ratio of the peripheral speed, Ω R, to the most probable molecular speed, √(2 k_B T / m), and the Reynolds number is Re = ρ_w Ω R² / μ, where m is the molecular mass, Ω and R are the rotational speed and radius of the cylinder, k_B is the Boltzmann constant, T is the gas temperature, ρ_w is the gas density at the wall, and μ is the gas viscosity. In the case of wall forcing, analytical solutions are obtained for the sixth-order generalized Onsager equation for the master potential and for the fourth-order generalized Carrier-Maslen equation for the velocity potential. For the case of mass/momentum/energy insertion into the flow, a separation-of-variables procedure is used, and appropriate homogeneous boundary conditions are specified so that the linear operators in the axial and radial directions are self-adjoint. The discrete eigenvalues and eigenfunctions of the linear operators (sixth order and second order in the radial and axial directions for the Onsager equation, and fourth order and second order in the axial and radial directions for the Carrier-Maslen equation) are determined. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations. The comparison reveals that the boundary conditions in the simulations and in the analysis have to be matched with care. The commonly used `diffuse reflection' boundary conditions at solid walls in DSMC simulations result in a non-zero slip velocity as well as a `temperature slip' (the gas temperature at the wall differs from the wall temperature). These have to be incorporated in the analysis in order to make quantitative predictions. In the case of mass/momentum/energy sources within the flow, it is necessary to ensure that the homogeneous boundary conditions are accurately satisfied in the simulations. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 10%, even when the stratification parameter is as low as 0.707, the Reynolds number is as low as 100, the aspect ratio (length/diameter) of the cylinder is as low as 2, and the secondary flow velocity is as high as 0.2 times the maximum base flow velocity. The predictions of the generalized models are also significantly better than those of the original Onsager and Carrier-Maslen models, which are restricted to thin boundary layers in the limit of high stratification parameter.
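
A small numerical illustration in Python of the two dimensionless groups defined above. The gas properties and cylinder parameters are hypothetical (roughly UF6-like values chosen only to exercise the formulas), not conditions taken from the paper.

import math

# Hypothetical gas and cylinder parameters (illustrative only).
m     = 352 * 1.660539e-27   # molecular mass [kg] (UF6-like)
Omega = 2 * math.pi * 1000   # rotation rate [rad/s]
R     = 0.25                 # cylinder radius [m]
T     = 300.0                # gas temperature [K]
rho_w = 0.05                 # gas density at the wall [kg/m^3]
mu    = 1.7e-5               # gas viscosity [Pa s]
k_B   = 1.380649e-23         # Boltzmann constant [J/K]

peripheral_speed = Omega * R
most_probable_speed = math.sqrt(2 * k_B * T / m)

A  = peripheral_speed / most_probable_speed    # stratification parameter
Re = rho_w * Omega * R**2 / mu                 # Reynolds number

print(f"A  = {A:.2f}")
print(f"Re = {Re:.3g}")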

Relevance:

100.00%

Publisher:

Abstract:

A theoretical analysis is carried out to observe the influence of important flow parameters, such as the Nusselt number and the Sherwood number, on the tip speed of an equiaxed dendrite growing in a convecting alloy melt. The effect of thermal and solutal transfer at the interface due to convection is equated to an undercooling of the melt, and an expression is derived for this equivalent undercooling in terms of the flow Nusselt and Sherwood numbers. Results for the equivalent undercooling are compared with corresponding numerical values obtained by performing simulations based on the enthalpy method. This method represents a relatively simple procedure for analyzing the effects of melt convection on the growth rate of dendrites.

Relevance:

100.00%

Publisher:

Abstract:

Many problems in control and signal processing can be formulated as sequential decision problems for general state space models. However, except for some simple models, one cannot obtain analytical solutions and has to resort to approximation. In this thesis, we investigate problems where Sequential Monte Carlo (SMC) methods can be combined with a gradient-based search to provide solutions to online optimisation problems. The main contributions of the thesis are as follows. Chapter 4 focuses on solving the sensor scheduling problem when it is cast as a controlled Hidden Markov Model. We consider the case in which the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. In sensor scheduling, our aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions. This is in contrast to existing works in the literature, which only solve approximations to the original problem. In Chapter 5 we present how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which leads to a policy gradient solution for the parameterised problem. The resulting SMC algorithm follows a structure similar to the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7 and 8, dynamic graphical models are combined with state space models for the purpose of online decentralised inference. We concentrate on the distributed parameter estimation problem using two maximum likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM). The resulting algorithms can be interpreted as an extension of the Belief Propagation (BP) algorithm to compute likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses a nonparametric approximation of Belief Propagation. The algorithms were successfully applied to solve the sensor localisation problem for sensor networks of small and medium size.
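
A minimal sketch in Python (with NumPy) of the SMC building block that these methods extend: a plain bootstrap particle filter for a hypothetical one-dimensional nonlinear state space model. It is not the thesis's controlled-HMM, policy-gradient or decentralised algorithms; it only illustrates the propagate/weight/resample cycle on which they are built.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar state space model:
#   x_t = 0.8 x_{t-1} + process noise,   y_t = x_t^2 / 20 + observation noise
def propagate(x):
    return 0.8 * x + rng.normal(0.0, 1.0, size=x.shape)

def log_obs_density(y, x):
    return -0.5 * (y - x**2 / 20.0) ** 2   # unit-variance Gaussian, up to a constant

# Simulate some data from the model.
T, x_true, ys = 50, 0.0, []
for _ in range(T):
    x_true = 0.8 * x_true + rng.normal()
    ys.append(x_true**2 / 20.0 + rng.normal())

# Bootstrap particle filter.
N = 500
particles = rng.normal(0.0, 1.0, size=N)
estimates = []
for y in ys:
    particles = propagate(particles)             # propagate through the dynamics
    logw = log_obs_density(y, particles)         # weight by the observation likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(np.sum(w * particles))      # filtered mean estimate
    idx = rng.choice(N, size=N, p=w)             # multinomial resampling
    particles = particles[idx]

print("last filtered mean:", round(estimates[-1], 3), " true state:", round(x_true, 3))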

Relevance:

100.00%

Publisher:

Abstract:

A simple approach based on the effective index method was used to estimate the minimum bend radii of curved SOI waveguides. An analytical formula was obtained to estimate the minimum radius of curvature at which the mode becomes cut off due to side radiative loss.

Relevance:

100.00%

Publisher:

Abstract:

Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling cyanobacterial behaviour in freshwaters is an important tool for understanding their population dynamics and for predicting the location and timing of bloom events in lakes, reservoirs and rivers. A new deterministic mathematical model was developed that simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of bloom formation, vertical migration and lateral transport of colonies within river environments, taking into account the major factors that affect cyanobacterial bloom formation in rivers, including light, nutrients and temperature. A parameter sensitivity analysis using a one-at-a-time approach was carried out. The sensitivity analysis presented in this paper had two objectives: to identify the key parameters controlling the growth and movement patterns of cyanobacteria, and to provide a means of model validation. The results of the analysis suggest that the maximum growth rate and the day-length period were the most significant parameters in determining the population growth and the colony depth, respectively.
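
A minimal sketch in Python of a one-at-a-time (OAT) parameter sensitivity analysis. The toy growth function and its parameters (maximum growth rate, light half-saturation, day length) are hypothetical stand-ins for the river bloom model; the sketch only shows how each parameter is perturbed individually around a baseline, with the others held fixed, and how the resulting change in model output is compared.

# Hypothetical toy model: net population growth over one day as a function of
# three parameters (this is NOT the paper's river model, just a placeholder).
def model_output(params):
    mu_max, k_light, day_length = params["mu_max"], params["k_light"], params["day_length"]
    light = 200.0                                   # fixed irradiance for the toy example
    growth = mu_max * light / (k_light + light)     # Monod-type light limitation
    return growth * day_length                      # growth accumulated over daylight hours

baseline = {"mu_max": 1.2, "k_light": 50.0, "day_length": 12.0}
base_out = model_output(baseline)

# Perturb each parameter by +/-20% one at a time, with the others held at baseline.
for name in baseline:
    effects = []
    for factor in (0.8, 1.2):
        perturbed = dict(baseline)
        perturbed[name] = baseline[name] * factor
        effects.append((model_output(perturbed) - base_out) / base_out)
    print(f"{name:>11}: -20% -> {effects[0]:+.2%}, +20% -> {effects[1]:+.2%}")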

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem on-line plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic process, provide its dynamic Bayesian network (DBN) structure and analyse the properties of this network. We then describe an application of the Rao-Blackwellised Particle Filter to the AHMM, which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for the AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for on-line probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data.
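
A minimal sketch in Python (with NumPy) of Rao-Blackwellised particle filtering on a much-simplified two-level model, not the AHMM itself: the top-level "plan" is sampled per particle, while the low-level state is marginalised exactly, conditional on each particle's sampled plan history. All matrices and the observation sequence are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Simplified two-level model: a top-level "plan" switches slowly between 2 options,
# each plan induces different dynamics for a 3-valued low-level state, and the state
# emits a noisy discrete observation.
Pi = np.array([[0.95, 0.05],
               [0.05, 0.95]])                       # plan transitions
A  = np.array([[[0.8, 0.2, 0.0],                    # state transitions under plan 0
                [0.1, 0.8, 0.1],
                [0.0, 0.2, 0.8]],
               [[0.8, 0.0, 0.2],                    # state transitions under plan 1
                [0.1, 0.1, 0.8],
                [0.2, 0.0, 0.8]]])
B  = np.array([[0.8, 0.1, 0.1],                     # observation model p(y | state)
               [0.1, 0.8, 0.1],
               [0.1, 0.1, 0.8]])

observations = [0, 0, 1, 2, 2, 2, 2, 1, 2, 2]       # hypothetical observation sequence

# Rao-Blackwellised particle filter: sample the plan, keep an exact belief over the
# low-level state conditional on each particle's sampled plan history.
N = 200
plans   = rng.integers(0, 2, size=N)                # sampled top-level plan per particle
beliefs = np.full((N, 3), 1.0 / 3.0)                # exact p(state | plan history, y_{1:t})

for y in observations:
    # Sample each particle's next plan from the plan transition model.
    plans = np.array([rng.choice(2, p=Pi[p]) for p in plans])
    # Exact prediction and Bayes update of the low-level state belief per particle.
    predicted = np.einsum("ns,nst->nt", beliefs, A[plans])      # belief @ A[plan]
    likelihoods = predicted @ B                                  # p(y' | particle), all y'
    weights = likelihoods[:, y]
    beliefs = predicted * B[:, y]
    beliefs /= beliefs.sum(axis=1, keepdims=True)
    # Resample particles (plan + belief) in proportion to the weights.
    weights = weights / weights.sum()
    idx = rng.choice(N, size=N, p=weights)
    plans, beliefs = plans[idx], beliefs[idx]
    print(f"y={y}  P(plan=1) ~ {np.mean(plans == 1):.2f}")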

Relevance:

100.00%

Publisher:

Abstract:

Binary signatures have been widely used to detect malicious software on the current Internet. However, this approach is unable to accurately identify polymorphic malware variants, which can easily be generated by malware authors using code generation engines. Code generation engines randomly produce varying code sequences that nevertheless perform the same malicious functions. Previous research used flow graphs and signature trees to identify polymorphic malware families. The key difficulty in that work is the generation of precisely defined state machine models from polymorphic variants. This paper proposes a novel approach, using a Hierarchical Hidden Markov Model (HHMM), to provide accurate inductive inference of the malware family. This model can capture the self-similar and hierarchical structure of polymorphic malware family signature sequences. To demonstrate the effectiveness and efficiency of the approach, we evaluate it with real malware samples. Using more than 15,000 real malware samples, we find that the approach achieves high true positive rates, low false positive rates, and low computational cost.

Relevance:

100.00%

Publisher:

Abstract:

Driving direction prediction can be useful in applications such as driver warning and route recommendation. In this paper, a framework is proposed to predict the driving direction based on a weighted Markov model. First, a city POI (Point of Interest) map is generated from trajectory data using a weighted PageRank algorithm. Then, a weighted Markov model is trained for near-term driving direction prediction based on the POI map and historical trajectories. Experimental results on a real-world data set indicate that the proposed method improves on the original Markov prediction model by 10% in some circumstances and by 5% overall.
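
A minimal sketch in Python of the weighted Markov idea. The trajectories are hypothetical sequences of location labels, and the recency-style weights are an assumption chosen for illustration; the sketch only shows how weighted transition counts yield a next-step prediction, not the paper's POI-map construction with weighted PageRank.

from collections import defaultdict

# Hypothetical historical trajectories as sequences of discretised locations/directions,
# each paired with a weight (e.g. more recent trips count more).
trajectories = [
    (["home", "mall", "office"], 1.0),
    (["home", "gym", "office"], 0.5),
    (["home", "mall", "office"], 2.0),     # recent trip, higher weight
    (["office", "mall", "home"], 1.0),
]

# Accumulate weighted first-order transition counts.
counts = defaultdict(lambda: defaultdict(float))
for path, weight in trajectories:
    for src, dst in zip(path, path[1:]):
        counts[src][dst] += weight

def predict_next(current):
    # Return the most probable next location and its weighted probability.
    nxt = counts[current]
    total = sum(nxt.values())
    if total == 0:
        return None, 0.0
    best = max(nxt, key=nxt.get)
    return best, nxt[best] / total

print(predict_next("home"))    # expected: ('mall', 0.857...) with these toy weights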