973 results for Gaussian Probability Distribution
Abstract:
We investigate a conjecture on the cover times of planar graphs by means of large Monte Carlo simulations. The conjecture states that the cover time τ(G_N) of a planar graph G_N of N vertices and maximal degree d is lower bounded by τ(G_N) ≥ C(d) N (ln N)², with C(d) = (d/4π) tan(π/d), with equality holding for some geometries. We tested this conjecture on the regular honeycomb (d = 3), regular square (d = 4), regular elongated triangular (d = 5), and regular triangular (d = 6) lattices, as well as on the nonregular Union Jack lattice (d_min = 4, d_max = 8). Indeed, the Monte Carlo data suggest that the rigorous lower bound may hold as an equality for most of these lattices, with an interesting issue in the case of the Union Jack lattice. The data for the honeycomb lattice, however, violate the bound with the conjectured constant. The empirical probability distribution function of the cover time for the square lattice is also briefly presented, since very little is known about cover time probability distribution functions in general.
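The conjectured scaling can be tried in miniature. The sketch below is an illustration, not the paper's code: it estimates the mean cover time of a simple random walk on a small square lattice with periodic boundaries (an idealisation of the planar lattices studied) and computes the d = 4 bound C(4) N (ln N)² with C(4) = (4/4π) tan(π/4) = 1/π; the lattice size and run count are arbitrary choices of this sketch.

```python
import math
import random

def cover_time(L, seed=0):
    """Cover time of one simple random walk on an L x L square lattice
    with periodic boundaries (chosen here only to keep the sketch short)."""
    rng = random.Random(seed)
    N = L * L
    x = y = 0
    visited = {(0, 0)}
    steps = 0
    while len(visited) < N:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % L, (y + dy) % L
        visited.add((x, y))
        steps += 1
    return steps

L = 20
N = L * L
est = sum(cover_time(L, s) for s in range(20)) / 20  # crude Monte Carlo mean
# For d = 4 the conjecture reads  tau >= C(4) N (ln N)^2  with C(4) = 1/pi.
bound = (4 / (4 * math.pi)) * math.tan(math.pi / 4) * N * math.log(N) ** 2
```

At this small size, finite-size corrections are large, so `est` should only be expected to match `bound` in order of magnitude.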
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in a rotational placement of a set of irregular two-dimensional items inside a two-dimensional container. This problem is approached with a heuristic based on Simulated Annealing (SA) with adaptive neighborhood. The objective function is evaluated in a constructive approach, where the items are placed sequentially. The placement is governed by three different types of parameters: the sequence of placement, the rotation angle and the translation. The rotation and translation applied to a polygon are cyclic continuous parameters, while the sequence of placement defines a combinatorial problem. Thus, it is necessary to control both cyclic continuous and discrete parameters. The approaches described in the literature deal with only one type of parameter (sequence of placement or translation). In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate.
Abstract:
Simulated annealing (SA) is an optimization technique that can process cost functions with arbitrary degrees of nonlinearity, discontinuity and stochasticity. It can process arbitrary boundary conditions and constraints imposed on these cost functions. The SA technique is applied to the problem of robot path planning. Three situations are considered here: the path is represented as a polyline; as a Bezier curve; and as a spline interpolated curve. In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate. (C) 2010 Elsevier Ltd. All rights reserved.
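The per-parameter step-adaptation idea can be sketched as follows. This is an illustration in the spirit of the abstract, not the authors' algorithm: each coordinate keeps its own proposal step, which widens when that coordinate's moves are accepted and narrows otherwise; the test function, cooling schedule and adaptation factors are all assumptions of this sketch.

```python
import math
import random

def adaptive_sa(f, x0, iters=5000, t0=1.0, seed=0):
    """Simulated annealing with a per-coordinate adaptive proposal step.

    Accepted moves widen the step of the coordinate that was perturbed;
    rejected moves narrow it, keeping the acceptance rate useful."""
    rng = random.Random(seed)
    x = list(x0)
    step = [1.0] * len(x)
    fx = f(x)
    best, fbest = list(x), fx
    for k in range(iters):
        t = t0 / (1 + k)                      # simple cooling schedule
        i = rng.randrange(len(x))             # perturb one coordinate
        cand = list(x)
        cand[i] += rng.uniform(-step[i], step[i])
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            step[i] *= 1.05                   # accepted: widen this step
            if fx < fbest:
                best, fbest = list(x), fx
        else:
            step[i] *= 0.95                   # rejected: narrow this step
    return best, fbest

# Minimise a smooth 2-D test function with minimum at (3, -1).
best, fbest = adaptive_sa(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                          [0.0, 0.0])
```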
Abstract:
Rectangular dropshafts, commonly used in sewers and storm water systems, are characterised by significant flow aeration. New detailed air-water flow measurements were conducted in a near-full-scale dropshaft at large discharges. In the shaft pool and outflow channel, the results demonstrated the complexity of different competitive air entrainment mechanisms. Bubble size measurements showed a broad range of entrained bubble sizes. Analysis of streamwise distributions of bubbles further suggested some clustering process in the bubbly flow although, in the outflow channel, bubble chords were on average smaller than in the shaft pool. A robust hydrophone was tested to measure bubble acoustic spectra and to assess its field application potential. The acoustic results accurately characterised the order of magnitude of entrained bubble sizes, but the transformation from acoustic frequencies to bubble radii did not correctly predict the probability distribution functions of bubble sizes.
Abstract:
Extracting human postural information from video sequences has proved a difficult research question. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, both in accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost path approach which is commonly used in image segmentation problems but is currently not widely used in tracking applications.
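The particle-filtering machinery the abstract builds on can be sketched in one dimension. The bootstrap filter below is illustrative only: the random-walk dynamics, noise levels and scalar state are assumptions of this sketch, and real posture trackers use far richer state and observation models. Each step propagates the particles, weights them by an observation likelihood, and resamples.

```python
import math
import random

def bootstrap_pf(obs, n_particles=500, seed=0):
    """Minimal bootstrap particle filter for a 1-D random-walk state
    observed in Gaussian noise; returns the posterior mean per step."""
    rng = random.Random(seed)
    parts = [0.0] * n_particles
    means = []
    for y in obs:
        # Propagate: random-walk dynamics with sigma = 0.5.
        parts = [x + rng.gauss(0, 0.5) for x in parts]
        # Weight by the Gaussian observation likelihood (sigma = 1).
        w = [math.exp(-0.5 * (y - x) ** 2) for x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        means.append(sum(wi * xi for wi, xi in zip(w, parts)))
        # Multinomial resampling to approximate the posterior.
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means

# Track a slowly drifting state from noisy observations.
truth = [0.1 * t for t in range(30)]
rng = random.Random(1)
obs = [x + rng.gauss(0, 1.0) for x in truth]
est = bootstrap_pf(obs)
```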
Abstract:
1. Although population viability analysis (PVA) is widely employed, forecasts from PVA models are rarely tested. This study in a fragmented forest in southern Australia contrasted field data on patch occupancy and abundance for the arboreal marsupial greater glider Petauroides volans with predictions from a generic spatially explicit PVA model. This work represents one of the first landscape-scale tests of its type. 2. Initially we contrasted field data from a set of eucalypt forest patches totalling 437 ha with a naive null model in which forecasts of patch occupancy were made, assuming no fragmentation effects and based simply on remnant area and measured densities derived from nearby unfragmented forest. The naive null model predicted an average total of approximately 170 greater gliders, considerably greater than the true count (n = 81). 3. Congruence was examined between field data and predictions from PVA under several metapopulation modelling scenarios. The metapopulation models performed better than the naive null model. Logistic regression showed highly significant positive relationships between predicted and actual patch occupancy for the four scenarios (P = 0.001-0.006). When the model-derived probability of patch occupancy was high (0.50-0.75, 0.75-1.00), there was greater congruence between actual patch occupancy and the predicted probability of occupancy. 4. For many patches, probability distribution functions indicated that model predictions for animal abundance in a given patch were not outside those expected by chance. However, for some patches the model either substantially over-predicted or under-predicted actual abundance. Some important processes, such as inter-patch dispersal, that influence the distribution and abundance of the greater glider may not have been adequately modelled. 5. Additional landscape-scale tests of PVA models, on a wider range of species, are required to assess further predictions made using these tools. This will help determine those taxa for which predictions are and are not accurate and give insights for improving models for applied conservation management.
Abstract:
We shall examine a model, first studied by Brockwell et al. [Adv Appl Probab 14 (1982) 709.], which can be used to describe the longterm behaviour of populations that are subject to catastrophic mortality or emigration events. Populations can suffer dramatic declines when disease, such as an introduced virus, affects the population, or when food shortages occur, due to overgrazing or fluctuations in rainfall. However, perhaps surprisingly, such populations can survive for long periods and, although they may eventually become extinct, they can exhibit an apparently stationary regime. It is useful to be able to model this behaviour. This is particularly true of the ecological examples that motivated the present study, since, in order to properly manage these populations, it is necessary to be able to predict persistence times and to estimate the conditional probability distribution of population size. We shall see that although our model predicts eventual extinction, the time till extinction can be long and the stationarity exhibited by these populations over any reasonable time scale can be explained using a quasistationary distribution. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Modern real-time systems, with a more flexible and adaptive nature, demand approaches for timeliness evaluation based on probabilistic measures of meeting deadlines. In this context, simulation can emerge as an adequate solution to understand and analyze the timing behaviour of actual systems. However, care must be taken with the obtained outputs, otherwise the results may lack credibility. It is particularly important to consider that we are more interested in values from the tail of a probability distribution (near worst-case probabilities) than in deriving confidence on mean values. We approach this subject by considering the random nature of simulation output data. We start by discussing well-known approaches for estimating distributions from simulation output, and the confidence which can be placed in their mean values. This is the basis for a discussion of the applicability of such approaches to derive confidence on the tail of distributions, where the worst case is expected to be.
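The mean-versus-tail point can be made concrete with a small sketch. Everything here is an assumption of the sketch (the exponential "response times", the deadline, the sample size): it estimates a near-deadline tail probability from synthetic simulation output and attaches the usual normal-approximation confidence half-width, which is exactly the approximation that becomes fragile when exceedances are rare.

```python
import math
import random

def tail_probability(samples, deadline, z=2.576):
    """Estimate p = P(response time > deadline) from i.i.d. simulation
    output, with a normal-approximation ~99% confidence half-width.
    The approximation degrades in the rare-event tail (few exceedances)."""
    n = len(samples)
    k = sum(1 for s in samples if s > deadline)
    p_hat = k / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, half

rng = random.Random(42)
# Synthetic "response times": exponential with mean 1 (an assumption).
times = [rng.expovariate(1.0) for _ in range(100_000)]
p_hat, half = tail_probability(times, deadline=5.0)
# True tail here is exp(-5) ~ 0.0067; the *relative* error of p_hat is
# much larger than that of the sample mean, illustrating why confidence
# on the tail is harder to obtain than confidence on mean values.
```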
Abstract:
We consider a Bertrand duopoly model with unknown costs. Each firm's aim is to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies, and uses one of them according to a certain probability distribution. The use of either one or the other technology affects the unitary production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the cheapest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production costs being dominated by the effect of the firm's own expected production costs.
Abstract:
In this paper, we consider a Stackelberg duopoly competition with differentiated goods and with unknown costs. The firms' aim is to choose the output levels of their products according to the well-known concept of perfect Bayesian equilibrium. One firm (F1) first chooses the quantity q1 of its good; the other firm (F2) observes q1 and then chooses the quantity q2 of its good. We suppose that each firm has two different technologies, and uses one of them following a probability distribution. The use of either one or the other technology affects the unitary production cost. We show that there is exactly one perfect Bayesian equilibrium for this game. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the cheapest cost.
Abstract:
The conclusions of the Bertrand model of competition are substantially altered by the presence of either differentiated goods or asymmetric information about the rival’s production costs. In this paper, we consider a Bertrand competition with differentiated goods. Furthermore, we suppose that each firm has two different technologies, and uses one of them according to a certain probability distribution. The use of either one or the other technology affects the unitary production cost. We show that this game has exactly one Bayesian Nash equilibrium. We perform ex-ante and ex-post analyses of firms’ profits and market prices. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival’s expected production costs being dominated by the effect of the firm’s own expected production costs.
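The claim that expected profit rises with the variance of production costs can be checked in a toy parameterisation. The sketch below uses assumptions of this sketch, not the paper's exact model: symmetric linear demand q_i = a − p_i + d·p_j, and a two-point cost distribution c ± s with equal probabilities. Because the rival reacts only to the expected price, equilibrium profit is convex in own cost, so a mean-preserving spread raises its expectation.

```python
# Toy check: differentiated-goods Bertrand with privately known costs.
# Assumed demand: q_i = a - p_i + d*p_j; costs c_i in {c - s, c + s},
# each with probability 1/2; firms are symmetric.
a, d, c = 10.0, 0.5, 2.0

def expected_profit(s):
    # Best response to the expected rival price: p_i = (a + c_i + d*Pbar)/2,
    # and taking expectations gives Pbar = (a + c)/(2 - d), independent of s.
    pbar = (a + c) / (2 - d)
    total = 0.0
    for ci in (c - s, c + s):                 # two equally likely cost types
        pi = (a + ci + d * pbar) / 2          # type-contingent price
        qi = a - pi + d * pbar                # expected demand faced
        total += 0.5 * (pi - ci) * qi         # ex-ante expected profit
    return total

# A mean-preserving spread in costs raises expected profit.
low_var, high_var = expected_profit(0.0), expected_profit(1.0)
```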
Abstract:
In this paper, we consider a Stackelberg duopoly competition with differentiated goods, linear and symmetric demand and with unknown costs. In our model, the two firms play a non-cooperative game with two stages: in the first stage, firm F1 chooses the quantity, q1, that it is going to produce; in the second stage, firm F2 observes the quantity q1 produced by firm F1 and chooses its own quantity q2. Firms choose their output levels in order to maximise their profits. We suppose that each firm has two different technologies, and uses one of them following a certain probability distribution. The use of either one or the other technology affects the unitary production cost. We show that there is exactly one perfect Bayesian equilibrium for this game. We analyse the variations of the expected profits with the parameters of the model, namely with the parameters of the probability distributions, and with the parameters of the demand and differentiation.
Abstract:
We consider two firms, located in different countries, selling the same homogeneous good in both countries. In each country there is a non-negative tariff on imports of the good produced in the other country. We suppose that each firm has two different technologies, and uses one of them according to a certain probability distribution. The use of either one or the other technology affects the unitary production cost. We analyse the effect of the production costs uncertainty on the profits of the firms and also on the welfare of the governments.
Abstract:
We consider a dynamic price-setting duopoly model in which a dominant (leader) firm moves first and a subordinate (follower) firm moves second. We suppose that each firm has two different technologies, and uses one of them according to a certain probability distribution. The use of either one or the other technology affects the unitary production cost. We analyse the effect of the production costs uncertainty on the profits of the firms, for different values of the intercept demand parameters.
Abstract:
Doctoral thesis in Science and Engineering of Polymers and Composites