936 results for probabilistic ranking


Relevance: 10.00%

Abstract:

This paper addresses the time-variant reliability analysis of structures with random resistance or random system parameters. It deals with the problem of a random load process crossing a random barrier level. The implications of approximating the arrival rate of the first overload by an ensemble-crossing rate are studied. The error involved in this so-called "ensemble-crossing rate" approximation is described in terms of load process and barrier distribution parameters, and in terms of the number of load cycles. Existing results are reviewed, and significant improvements involving load process bandwidth, mean-crossing frequency and time are presented. The paper shows that the ensemble-crossing rate approximation can be accurate enough for problems where the load process variance is large in comparison to the barrier variance, especially when the number of load cycles is small. This includes important practical applications like random vibration due to impact loadings and earthquake loading. Two application examples are presented, one involving earthquake loading and one involving a frame structure subject to wind and snow loadings. (C) 2007 Elsevier Ltd. All rights reserved.
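
A minimal Monte Carlo sketch of the idea, assuming Gaussian load-per-cycle and barrier distributions and a discrete-cycle idealisation of the load process (the parameters below are illustrative, not taken from the paper): the "direct" estimate keeps one barrier realisation per history, while the ensemble-crossing-rate approximation compounds a per-cycle crossing probability taken over the joint load-barrier ensemble.

    import numpy as np

    rng = np.random.default_rng(0)
    n_sim, n_cycles = 100_000, 50          # simulations, load cycles
    mu_S, sd_S = 0.0, 1.0                  # load per cycle (assumed Gaussian)
    mu_R, sd_R = 4.0, 0.5                  # random barrier (resistance)

    # Direct estimate of the first-overload probability: each history keeps
    # ONE barrier realisation; failure if any cycle load exceeds it.
    R = rng.normal(mu_R, sd_R, n_sim)
    S = rng.normal(mu_S, sd_S, (n_sim, n_cycles))
    p_direct = np.mean(S.max(axis=1) > R)

    # Ensemble-crossing-rate approximation: the per-cycle crossing probability
    # is taken over the joint ensemble of load AND barrier, then compounded
    # as if cycles were independent events.
    p_cycle = np.mean(rng.normal(mu_S, sd_S, n_sim) > rng.normal(mu_R, sd_R, n_sim))
    p_approx = 1.0 - (1.0 - p_cycle) ** n_cycles

    print(f"direct MC estimate      : {p_direct:.4f}")
    print(f"ensemble-crossing approx: {p_approx:.4f}")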

Relevance: 10.00%

Abstract:

Ecological niche modelling combines species occurrence points with environmental raster layers in order to obtain models for describing the probabilistic distribution of species. The process to generate an ecological niche model is complex. It requires dealing with a large amount of data, the use of different software packages for data conversion, for model generation and for different types of processing and analyses, among other functionalities. A software platform that integrates all requirements under a single and seamless interface would be very helpful for users. Furthermore, since biodiversity modelling is constantly evolving, new requirements are constantly being added in terms of functions, algorithms and data formats. Any software intended for use in this area must keep pace with this evolution. In this scenario, a Service-Oriented Architecture (SOA) is an appropriate choice for designing such systems. According to SOA best practices and methodologies, the design of a reference business process must be performed prior to the architecture definition. The purpose is to understand the complexities of the process (business process in this context refers to the ecological niche modelling problem) and to design an architecture able to offer a comprehensive solution, called a reference architecture, that can be further detailed when implementing specific systems. This paper presents a reference business process for ecological niche modelling, as part of a larger work focused on the definition of a reference architecture based on SOA concepts that will be used to evolve the openModeller software package for species modelling. The basic steps that are performed while developing a model are described, highlighting important aspects, based on the knowledge of modelling experts. In order to illustrate the steps defined for the process, an experiment was developed, modelling the distribution of Ouratea spectabilis (Mart.) Engl. (Ochnaceae) using openModeller. As a consequence of the knowledge gained with this work, many desirable improvements to the modelling software packages have been identified and are presented. Also, a discussion on the potential for large-scale experimentation in ecological niche modelling is provided, highlighting opportunities for research. The results obtained are very important for those involved in the development of modelling tools and systems, for requirement analysis and to provide insight into new features and trends for this category of systems. They can also be very helpful for beginners in modelling research, who can use the process and the experiment example as a guide to this complex activity. (c) 2008 Elsevier B.V. All rights reserved.
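
To make the basic workflow concrete, here is a small sketch of the occurrence-points-plus-raster-layers step, using a simple bounding-envelope (BIOCLIM-style) rule as a stand-in for the algorithms openModeller offers; the synthetic data and the 5-95 percentile envelope are illustrative assumptions, and this is not openModeller's API.

    import numpy as np

    rng = np.random.default_rng(1)

    # Two synthetic environmental raster layers (e.g. temperature, rainfall).
    temp = rng.normal(22, 4, size=(100, 100))
    rain = rng.normal(1200, 300, size=(100, 100))
    layers = np.stack([temp, rain])                    # shape (n_vars, rows, cols)

    # Synthetic occurrence points as (row, col) cells where the species was seen.
    occ = rng.integers(0, 100, size=(30, 2))
    occ_env = layers[:, occ[:, 0], occ[:, 1]]          # env values at occurrences

    # BIOCLIM-style envelope: a cell is suitable in one variable if its value
    # falls between the 5th and 95th percentile of the occurrence values.
    lo = np.percentile(occ_env, 5, axis=1)[:, None, None]
    hi = np.percentile(occ_env, 95, axis=1)[:, None, None]
    inside = (layers >= lo) & (layers <= hi)

    # Suitability surface in [0, 1]: fraction of variables inside the envelope.
    suitability = inside.mean(axis=0)
    print("cells with suitability 1.0:", int((suitability == 1.0).sum()))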

Relevance: 10.00%

Abstract:

This paper features a tool to carry out environmental performance evaluation in highway rehabilitation works, as a component of environmental supervision activities. The procedure involves (i) evidence gathering by conducting technical inspections and reviewing environmental compliance reports, (ii) ranking nonconformities according to a proposed weighting framework, and (iii) calculation of an environmental conformity index. For the sake of testing and calibration, the procedure was applied to five road segments that were submitted to rehabilitation works in Sao Paulo State and the results were treated qualitatively.
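
A sketch of how such a conformity index might be computed; the severity weights and the normalisation below are illustrative assumptions of this example, not the paper's actual weighting framework.

    # Each nonconformity found during an inspection gets a severity weight
    # (illustrative scale; the paper's weighting framework is not reproduced here).
    WEIGHTS = {"minor": 1, "moderate": 3, "major": 9}

    def conformity_index(checked_items, nonconformities):
        """Return a 0-100 score: 100 = fully conforming road segment.

        checked_items   -- number of environmental requirements verified
        nonconformities -- list of severity labels found, e.g. ["minor", "major"]
        """
        penalty = sum(WEIGHTS[s] for s in nonconformities)
        worst_case = checked_items * max(WEIGHTS.values())
        return 100.0 * (1.0 - penalty / worst_case)

    print(conformity_index(40, ["minor", "minor", "moderate", "major"]))  # ~96.1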

Relevance: 10.00%

Abstract:

This work deals with the problem of minimizing the waste of space that occurs on a rotational placement of a set of irregular bi-dimensional items inside a bi-dimensional container. This problem is approached with a heuristic based on Simulated Annealing (SA) with adaptive neighborhood. The objective function is evaluated in a constructive approach, where the items are placed sequentially. The placement is governed by three different types of parameters: the sequence of placement, the rotation angle and the translation. The rotation applied and the translation of the polygon are cyclic continuous parameters, and the sequence of placement defines a combinatorial problem. Thus, it is necessary to control both cyclic continuous and discrete parameters. The approaches described in the literature deal with only one type of parameter (sequence of placement or translation). In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate.
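
A compact sketch of simulated annealing with a per-parameter adaptive neighborhood for the cyclic continuous parameters (rotation angles). The objective function and the widen-on-accept / narrow-on-reject rule below are simplified stand-ins for the constructive placement evaluation and the sensitivity-driven proposal distribution described in the paper, and the discrete sequence-of-placement parameter is omitted.

    import math, random

    random.seed(0)

    # Toy stand-in for the placement objective: the real evaluation would place
    # the items sequentially and measure wasted container area.
    def wasted_space(angles):
        return sum((math.sin(3 * a) + 1.0) for a in angles)

    n_items = 5
    angles = [random.uniform(0, 2 * math.pi) for _ in range(n_items)]
    steps = [0.5] * n_items                 # one adaptive step size per parameter
    T, cost = 2.0, wasted_space(angles)

    for it in range(20_000):
        i = random.randrange(n_items)
        cand = list(angles)
        # cyclic continuous parameter: perturb and wrap into [0, 2*pi)
        cand[i] = (cand[i] + random.gauss(0.0, steps[i])) % (2 * math.pi)
        new_cost = wasted_space(cand)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / T):
            angles, cost = cand, new_cost
            steps[i] = min(steps[i] * 1.05, math.pi)   # accepted: widen neighborhood
        else:
            steps[i] = max(steps[i] * 0.95, 1e-3)      # rejected: narrow it
        T *= 0.9997                                    # cooling schedule

    print(f"final wasted-space value: {cost:.4f}")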

Relevance: 10.00%

Abstract:

The procedure of on-line process control by attributes, known as Taguchi's on-line process control, consists of inspecting the mth item (a single item) at every m produced items and deciding, at each inspection, whether the fraction of conforming items was reduced or not. If the inspected item is nonconforming, the production is stopped for adjustment. As the inspection system can be subject to diagnosis errors, a probabilistic model is developed in which the examined item is classified repeatedly until either a conforming or b non-conforming classifications are observed. The first event that occurs (a conforming classifications or b non-conforming classifications) determines the final classification of the examined item. Properties of an ergodic Markov chain were used to obtain the expression for the average cost of the control system, which can be optimized over three parameters: the sampling interval of the inspections (m); the number of repeated conforming classifications (a); and the number of repeated non-conforming classifications (b). The optimum design is compared with two alternative approaches: the first consists of a simple preventive policy, in which the production system is adjusted at every n produced items (no inspection is performed). The second classifies the examined item repeatedly a fixed number r of times and considers it conforming if most classification results are conforming. Results indicate that the current proposal performs better than the procedure that fixes the number of repeated classifications and classifies the examined item as conforming if most classifications were conforming. On the other hand, depending on the magnitude of the diagnosis errors and of the costs, the preventive policy can, on average, be more economical than the alternatives that require inspection. A numerical example illustrates the proposed procedure. (C) 2009 Elsevier B.V. All rights reserved.
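
A small simulation of the repeated-classification rule under diagnosis errors; the error rates alpha and beta and the thresholds a and b below are illustrative, and the sketch reproduces only the classification mechanism, not the Markov-chain cost model.

    import random

    random.seed(1)

    def classify(item_is_conforming, alpha, beta):
        """One error-prone inspection: alpha = P(judge nonconforming | conforming),
        beta = P(judge conforming | nonconforming)."""
        if item_is_conforming:
            return random.random() >= alpha        # True = judged conforming
        return random.random() < beta

    def final_decision(item_is_conforming, a, b, alpha, beta):
        """Repeat classifications until 'a' conforming or 'b' nonconforming
        verdicts accumulate; the first count reached decides the item."""
        conf = nonconf = 0
        while True:
            if classify(item_is_conforming, alpha, beta):
                conf += 1
                if conf == a:
                    return True                    # declared conforming
            else:
                nonconf += 1
                if nonconf == b:
                    return False                   # declared nonconforming -> adjust

    # Probability that a truly nonconforming item slips through, for a = 2, b = 3.
    trials = 100_000
    missed = sum(final_decision(False, a=2, b=3, alpha=0.05, beta=0.10)
                 for _ in range(trials))
    print(f"P(nonconforming item declared conforming) ~ {missed / trials:.4f}")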

Relevance: 10.00%

Abstract:

Susceptible-infective-removed (SIR) models are commonly used for representing the spread of contagious diseases. A SIR model can be described in terms of a probabilistic cellular automaton (PCA), where each individual (corresponding to a cell of the PCA lattice) is connected to others by a random network favoring local contacts. Here, this framework is employed for investigating the consequences of applying vaccine against the propagation of a contagious infection, by considering vaccination as a game, in the sense of game theory. In this game, the players are the government and the susceptible newborns. In order to maximize their own payoffs, the government attempts to reduce the costs for combating the epidemic, and the newborns may be vaccinated only when infective individuals are found in their neighborhoods and/or the government promotes an immunization program. As a consequence of these strategies supported by cost-benefit analysis and perceived risk, numerical simulations show that the disease is not fully eliminated and the government implements quasi-periodic vaccination campaigns. (C) 2011 Elsevier B.V. All rights reserved.
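
A minimal probabilistic-cellular-automaton sketch of the underlying SIR dynamics on a lattice with purely local contacts; the parameters are illustrative and the game-theoretic vaccination layer described in the paper is omitted.

    import numpy as np

    rng = np.random.default_rng(2)
    S, I, R = 0, 1, 2
    n = 100                                   # lattice side
    p_inf, p_rec = 0.12, 0.10                 # per-contact infection / recovery prob.

    grid = np.full((n, n), S)
    grid[rng.integers(0, n, 20), rng.integers(0, n, 20)] = I   # initial infectives

    def step(grid):
        # number of infective neighbours (4-neighbourhood, toroidal lattice)
        inf = (grid == I).astype(int)
        neigh = (np.roll(inf, 1, 0) + np.roll(inf, -1, 0) +
                 np.roll(inf, 1, 1) + np.roll(inf, -1, 1))
        new = grid.copy()
        # S -> I with probability 1 - (1 - p_inf)^(#infective neighbours)
        p = 1.0 - (1.0 - p_inf) ** neigh
        new[(grid == S) & (rng.random((n, n)) < p)] = I
        # I -> R with probability p_rec
        new[(grid == I) & (rng.random((n, n)) < p_rec)] = R
        return new

    for t in range(200):
        grid = step(grid)
    print("final fractions  S: %.3f  I: %.3f  R: %.3f" %
          tuple((grid == k).mean() for k in (S, I, R)))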

Relevance: 10.00%

Abstract:

There are several ways of controlling the propagation of a contagious disease. For instance, to reduce the spreading of an airborne infection, individuals can be encouraged to remain in their homes and/or to wear face masks outside their domiciles. However, when a limited amount of masks is available, who should use them: the susceptible subjects, the infective persons or both populations? Here we employ susceptible-infective-recovered (SIR) models described in terms of ordinary differential equations and probabilistic cellular automata in order to investigate how the deletion of links in the random complex network representing the social contacts among individuals affects the dynamics of a contagious disease. The inspiration for this study comes from recent discussions about the impact of measures usually recommended by public health organizations for preventing the propagation of the swine influenza A (H1N1) virus. The answer to this question may also be valid for other eco-epidemiological systems. (C) 2010 Elsevier B.V. All rights reserved.
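
For the ODE side of the comparison, a brief sketch: deleting a fraction of the contact links is represented here, purely for illustration, as a proportional reduction of the mean-field transmission rate (the paper also treats the network explicitly through the probabilistic cellular automaton). The parameter values are assumptions of this example.

    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta, gamma):
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    beta0, gamma = 0.5, 0.1                 # baseline transmission and recovery rates
    y0 = [0.999, 0.001, 0.0]

    for removed_fraction in (0.0, 0.3, 0.6):
        # fraction of deleted links folded into the transmission rate
        beta = beta0 * (1.0 - removed_fraction)
        sol = solve_ivp(sir, (0, 300), y0, args=(beta, gamma))
        s_inf = sol.y[0, -1]
        print(f"links removed: {removed_fraction:.0%}  "
              f"final epidemic size: {1.0 - s_inf:.3f}")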

Relevance: 10.00%

Abstract:

We study the spreading of contagious diseases in a population of constant size using susceptible-infective-recovered (SIR) models described in terms of ordinary differential equations (ODEs) and probabilistic cellular automata (PCA). In the PCA model, each individual (represented by a cell in the lattice) is mainly locally connected to others. We investigate how the topological properties of the random network representing contacts among individuals influence the transient behavior and the permanent regime of the epidemiological system described by ODE and PCA. Our main conclusions are: (1) the basic reproduction number (commonly called R(0)) related to a disease propagation in a population cannot be uniquely determined from some features of the transient behavior of the infective group; (2) R(0) cannot be associated with a unique combination of clustering coefficient and average shortest path length characterizing the contact network. We discuss how these results can complicate the specification of control strategies for combating disease propagation. (C) 2009 Elsevier B.V. All rights reserved.
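
The two network descriptors mentioned can be computed, for example, on a Watts-Strogatz small-world graph standing in for the contact network (an illustrative choice made here, using networkx; the graph sizes and rewiring probabilities are assumptions of this example).

    import networkx as nx

    # A Watts-Strogatz graph stands in for the contact network: mostly local
    # links plus a fraction p of rewired (long-range) ones.
    for p in (0.0, 0.05, 0.3):
        g = nx.connected_watts_strogatz_graph(n=1000, k=6, p=p, seed=42)
        cc = nx.average_clustering(g)
        spl = nx.average_shortest_path_length(g)
        print(f"rewiring p={p:<5}  clustering={cc:.3f}  avg shortest path={spl:.2f}")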

Relevance: 10.00%

Abstract:

The aim of this paper is to present an economical design of an X chart for a short-run production. The process mean starts equal to mu(0) (in-control, State I) and at a random time it shifts to mu(1) > mu(0) (out-of-control, State II). The monitoring procedure consists of inspecting a single item for every m items produced. If the measurement of the quality characteristic does not meet the control limits, the process is stopped, adjusted, and additional (r - 1) items are inspected retrospectively. The probabilistic model was developed considering only shifts in the process mean. A direct search technique is applied to find the optimum parameters that minimize the expected cost function. Numerical examples illustrate the proposed procedure. (C) 2009 Elsevier B.V. All rights reserved.
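
A sketch of the direct-search step over the design parameters, using a simplified expected-cost-per-item expression that is an assumption of this example, not the paper's cost function (m is the sampling interval, k the control-limit width in sigma units, delta the shift size in sigma units).

    import numpy as np
    from scipy.stats import norm

    # Illustrative (not the paper's) expected-cost-per-item model for a chart
    # that inspects one item every m produced, with control limits at mu0 +/- k*sigma.
    def expected_cost(m, k, delta=1.0, pi=0.005,
                      c_insp=1.0, c_bad=20.0, c_adj=50.0, c_false=30.0):
        alpha = 2 * norm.sf(k)                              # false-alarm probability
        power = norm.sf(k - delta) + norm.cdf(-k - delta)   # detection probability
        arl_out = 1.0 / power                 # inspections needed to detect a shift
        cycle = 1.0 / pi                      # mean in-control run length (items)
        out_items = m * arl_out               # items produced while out of control
        cost = (c_insp * (cycle + out_items) / m           # inspection cost
                + c_false * alpha * cycle / m               # false alarms
                + c_bad * out_items                         # nonconforming production
                + c_adj)                                    # one adjustment per cycle
        return cost / (cycle + out_items)      # cost per produced item

    # Direct (grid) search over the design parameters.
    best = min(((expected_cost(m, k), m, k)
                for m in range(5, 101, 5)
                for k in np.arange(1.5, 3.6, 0.1)))
    print(f"min expected cost/item = {best[0]:.3f}  at m = {best[1]}, k = {best[2]:.1f}")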

Relevance: 10.00%

Abstract:

Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (Arnoldi and Lanczos processes), which is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, where, furthermore, the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is given to the computation of transient states of Markov chains.
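
A Python analogue of the core computation, the action of a sparse matrix exponential on a vector applied to the transient distribution of a small Markov chain. This uses SciPy's expm_multiply (a different algorithm internally) rather than Expokit's Krylov routines, so it is a stand-in, not Expokit's interface; the generator matrix is an illustrative birth-death chain.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import expm_multiply

    # Generator matrix Q of a small birth-death Markov chain (rows sum to zero).
    # Transient distribution: p(t) = p(0) exp(Q t), i.e. exp(Q^T t) applied to p0.
    rate_up, rate_down, n = 0.7, 1.0, 6
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = rate_up
        if i > 0:
            Q[i, i - 1] = rate_down
        Q[i, i] = -Q[i].sum()

    p0 = np.zeros(n)
    p0[0] = 1.0                               # start in state 0 with probability 1

    # Action of the matrix exponential on a vector, evaluated at several times.
    p_t = expm_multiply(csr_matrix(Q.T), p0, start=0.0, stop=5.0, num=6)[-1]
    print("p(5) =", np.round(p_t, 4), " sum =", p_t.sum())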

Relevance: 10.00%

Abstract:

Standard tools for the analysis of economic problems involving uncertainty, including risk premiums, certainty equivalents and the notions of absolute and relative risk aversion, are developed without making specific assumptions on functional form beyond the basic requirements of monotonicity, transitivity, continuity, and the presumption that individuals prefer certainty to risk. Individuals are not required to display probabilistic sophistication. The approach relies on the distance and benefit functions to characterize preferences relative to a given state-contingent vector of outcomes. The distance and benefit functions are used to derive absolute and relative risk premiums and to characterize preferences exhibiting constant absolute risk aversion (CARA) and constant relative risk aversion (CRRA). A generalization of the notion of Schur-concavity is presented. If preferences are generalized Schur concave, the absolute and relative risk premiums are generalized Schur convex, and the certainty equivalents are generalized Schur concave.
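
For reference, the standard definitions the paper builds on, written in notation of my own choosing (not the paper's): the absolute risk premium is the gap between the mean outcome and the certainty equivalent, and CARA/CRRA correspond to translation- and scale-invariance of the respective premiums.

    % Notation here is mine, not the paper's.  x is a state-contingent outcome
    % vector, \bar{x} its mean outcome, and e(x) the certainty equivalent.
    \begin{align*}
      \rho_A(x) &= \bar{x} - e(x)   && \text{absolute risk premium}\\
      \rho_R(x) &= 1 - e(x)/\bar{x} && \text{relative risk premium}\\
      \text{CARA:}\quad \rho_A(x + k\mathbf{1}) &= \rho_A(x) && \text{for every sure shift } k\\
      \text{CRRA:}\quad \rho_R(\lambda x) &= \rho_R(x)       && \text{for every scale } \lambda > 0
    \end{align*}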

Relevance: 10.00%

Abstract:

Sausage is a protein sequence threading program with remarkable run-time flexibility. Using different scripts, it can calculate protein sequence-structure alignments, search structure libraries, swap force fields, create models from alignments, convert file formats and analyse results. There are several different force fields which might be classed as knowledge-based, although they do not rely on Boltzmann statistics. Different force fields are used for alignment calculations and for subsequent ranking of calculated models.

Relevance: 10.00%

Abstract:

This is an overview of the first burden of disease and injury studies carried out in Australia. Methods developed for the World Bank and World Health Organization Global Burden of Disease Study were adapted and applied to Australian population health data. Depression was found to be the top-ranking cause of non-fatal disease burden in Australia, causing 8% of the total years lost due to disability in 1996. Mental disorders overall were responsible for nearly 30% of the non-fatal disease burden. The leading causes of total disease burden (disability-adjusted life years [DALYs]) were ischaemic heart disease and stroke, together causing nearly 18% of the total disease burden. Depression was the fourth leading cause of disease burden, accounting for 3.7% of the total burden. Of the 10 major risk factors to which the disease burden can be attributed, tobacco smoking causes an estimated 10% of the total disease burden in Australia, followed by physical inactivity (7%).

Relevance: 10.00%

Abstract:

In this paper, the minimum-order stable recursive filter design problem is proposed and investigated. This problem plays an important role in pipelined implementations in signal processing. Here, the existence of a high-order stable recursive filter is proved theoretically, and an upper bound for the highest order of stable filters is given. Then the minimum-order stable linear predictor is obtained by solving an optimization problem. In this paper, the popular genetic algorithm approach is adopted, since it is a heuristic probabilistic optimization technique and has been widely used in engineering designs. Finally, an illustrative example is used to show the effectiveness of the proposed algorithm.
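
A toy genetic-algorithm sketch for the stable-predictor part of the problem: fixed order, stability enforced by checking that all poles lie inside the unit circle, fitness given by the one-step prediction error. The signal, the GA operators and the order are illustrative assumptions; the paper additionally searches over the filter order itself.

    import numpy as np

    rng = np.random.default_rng(3)

    # Signal to be predicted (illustrative): a noisy sum of two sinusoids.
    t = np.arange(400)
    x = np.sin(0.07 * t) + 0.5 * np.sin(0.23 * t) + 0.05 * rng.standard_normal(t.size)

    ORDER = 4                                  # candidate predictor order

    def fitness(a):
        """Mean squared one-step prediction error; unstable filters are penalised."""
        poles = np.roots(np.concatenate(([1.0], -a)))      # poles of 1/(1 - sum a_k z^-k)
        if np.any(np.abs(poles) >= 1.0):                   # stability check
            return 1e6
        pred = sum(a[k] * x[ORDER - 1 - k:-1 - k] for k in range(ORDER))
        return float(np.mean((x[ORDER:] - pred) ** 2))

    # A very small genetic algorithm: truncation selection, blend crossover, mutation.
    pop = rng.uniform(-1, 1, size=(60, ORDER))
    for gen in range(200):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[:20]]             # keep the 20 fittest
        children = []
        while len(children) < len(pop):
            p1, p2 = parents[rng.integers(0, 20, 2)]
            child = 0.5 * (p1 + p2) + 0.02 * rng.standard_normal(ORDER)
            children.append(child)
        pop = np.array(children)

    best = pop[np.argmin([fitness(ind) for ind in pop])]
    print("best coefficients:", np.round(best, 3), " MSE:", round(fitness(best), 5))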

Relevance: 10.00%

Abstract:

Background: Between 1998 and 1999, a burden of disease assessment was carried out in Victoria, Australia, applying and improving on the methods of the Global Burden of Disease Study. This paper describes the methods and results of the calculations of the burden due to 22 mental disorders, adding 14 conditions not included in previous burden of disease estimates. Methods: The National Survey of Mental Health and Wellbeing provided recent data on the occurrence of the major adult mental disorders in Australia. Data from international studies and expert advice further contributed to the construction of disease models, describing each condition in terms of incidence, average duration and level of severity, with adjustments for comorbidity with other mental disorders. Disability weights for the time spent in different states of mental ill health were borrowed mainly from a study in the Netherlands, supplemented by weights derived in a local extrapolation exercise. Results: Mental disorders were the third largest group of conditions contributing to the burden of disease in Victoria, ranking behind cancers and cardiovascular diseases. Depression was the greatest cause of disability in both men and women. Eight other mental disorders in men and seven in women ranked among the top twenty causes of disability. Conclusions: Insufficient information on the natural history of many of the mental disorders, the limited information on the validity of mental disorder diagnoses in community surveys and considerable differences between ICD-10 and DSM-IV defined diagnoses were the main concerns about the accuracy of the estimates. Similar and often greater concerns have been raised in relation to the estimation of the burden from common non-fatal physical conditions such as asthma, diabetes and osteoarthritis. In comparison, psychiatric epidemiology can boast greater scientific rigour in setting standards for population surveys.