135 results for Probabilistic choice models

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance:

30.00%

Publisher:

Abstract:

Context. The distribution of chemical abundances and their variation with time are important tools for understanding the chemical evolution of galaxies. In particular, the study of chemical evolution models can improve our understanding of the basic assumptions made when modelling our Galaxy and other spirals.
Aims. We test a standard chemical evolution model for spiral disks in the Local Universe and study the influence of a threshold gas density and of different efficiencies in the star formation rate (SFR) law on the radial gradients of abundance, gas, and SFR. The model is then applied to specific galaxies.
Methods. We adopt a one-infall chemical evolution model in which the Galactic disk forms inside-out through infall of gas, and we test different thresholds and efficiencies in the SFR. The model is scaled to the disk properties of three Local Group galaxies (the Milky Way, M31 and M33) by varying the star formation efficiency and the timescale for the infall of gas onto the disk.
Results. With this simple model, we are able to reproduce most of the observational constraints available in the literature for the studied galaxies. The radial oxygen abundance gradients and their time evolution are studied in detail. The present-day abundance gradients are more sensitive to the threshold than to the other parameters, while their temporal evolution depends more on the chosen SFR efficiency. An efficiency that varies along the galactic radius can reproduce the present-day gas distribution in the disks of spirals with prominent arms. The steepness of the stellar surface density distribution differs between massive and lower-mass disks, owing to their different star formation histories.
Conclusions. The most massive disks seem to have evolved faster (i.e., with more efficient star formation) than the less massive ones, suggesting a downsizing in star formation for spirals. The threshold and the efficiency of star formation play very important roles in the chemical evolution of spiral disks; for instance, an efficiency varying with radius can be used to regulate the star formation. The oxygen abundance gradient can steepen or flatten in time depending on the choice of this parameter.
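As a hedged illustration of the kind of SFR law discussed here (a minimal sketch, not code from the paper), the snippet below implements a Schmidt-type star formation law with a gas-density threshold and a radius-dependent efficiency; all parameter values and profiles are hypothetical.

```python
import numpy as np

def sfr_surface_density(sigma_gas, efficiency, threshold, k=1.5):
    """Schmidt-type star formation law with a gas-density threshold.

    sigma_gas  : gas surface density [M_sun / pc^2]
    efficiency : SFR efficiency [Gyr^-1]; may vary with radius
    threshold  : no star formation below this surface density
    k          : Schmidt-law exponent (k ~ 1.5 is a common choice)
    """
    sigma_gas = np.asarray(sigma_gas, dtype=float)
    sfr = efficiency * sigma_gas**k
    return np.where(sigma_gas >= threshold, sfr, 0.0)

# Radius-dependent efficiency, higher in the inner disk, as one way to
# regulate star formation along the radius (illustrative values only).
radius = np.linspace(1.0, 20.0, 5)        # kpc
nu = 2.0 * np.exp(-radius / 10.0)         # Gyr^-1, hypothetical profile
sigma = 50.0 * np.exp(-radius / 3.5)      # M_sun/pc^2, exponential disk
print(sfr_surface_density(sigma, nu, threshold=7.0))
```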

Relevance:

30.00%

Publisher:

Abstract:

Fatigue and crack propagation are phenomena subject to high uncertainties, for which deterministic methods fail to predict structural life accurately. The present work couples reliability analysis with the boundary element method, which has been recognized as an accurate and efficient numerical technique for mixed-mode crack propagation and is therefore well suited to reliability analysis. The coupled procedure makes it possible to account for uncertainties during the crack growth process and to compute the probability of fatigue failure for complex structural geometries and loadings. Two coupling procedures are considered: direct coupling of the reliability and mechanical solvers, and indirect coupling through the response surface method. Numerical applications show the performance of the proposed models in lifetime assessment under uncertainties, with the direct method converging faster than the response surface method.
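A hedged sketch of the direct-coupling idea: a reliability loop (plain Monte Carlo here, for brevity) driving a black-box mechanical solver. The solver is a stand-in closed-form Paris-law life integration; in the paper's setting it would be a boundary element crack-propagation analysis. All input distributions and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def cycles_to_failure(a0, C, m, delta_sigma=100.0, a_c=0.05, Y=1.12):
    """Fatigue life by integrating the Paris law da/dN = C (dK)^m from the
    initial crack size a0 to the critical size a_c (simple Riemann sum).
    Stand-in for a BEM crack-growth run; units assumed MPa, metres."""
    a = np.linspace(a0, a_c, 2000)
    dK = Y * delta_sigma * np.sqrt(np.pi * a)
    return np.sum(1.0 / (C * dK**m)) * (a[1] - a[0])

# Uncertain inputs: initial crack size and Paris constants (assumed dists).
N = 5000
a0 = rng.lognormal(np.log(1e-3), 0.2, N)
C  = rng.lognormal(np.log(1e-11), 0.3, N)
m  = rng.normal(3.0, 0.1, N)

n_service = 2e5   # required service life in cycles (hypothetical)
life = np.array([cycles_to_failure(*x) for x in zip(a0, C, m)])
print("estimated P_f =", np.mean(life < n_service))
```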

Relevance:

30.00%

Publisher:

Abstract:

Ecological niche modelling combines species occurrence points with environmental raster layers in order to obtain models describing the probabilistic distribution of species. The process of generating an ecological niche model is complex: it requires dealing with a large amount of data and using different software packages for data conversion, model generation, and various types of processing and analysis, among other tasks. A software platform integrating all of these requirements under a single, seamless interface would be very helpful for users. Furthermore, since biodiversity modelling is constantly evolving, new requirements are continually being added in terms of functions, algorithms and data formats, and any software intended for this area must keep pace with this evolution. In this scenario, a Service-Oriented Architecture (SOA) is an appropriate choice for designing such systems. According to SOA best practices and methodologies, a reference business process must be designed before the architecture is defined. The purpose is to understand the complexities of the process (here, the business process refers to the ecological niche modelling problem) and to design an architecture able to offer a comprehensive solution, called a reference architecture, that can be further detailed when implementing specific systems. This paper presents a reference business process for ecological niche modelling, as part of a larger effort to define a reference architecture based on SOA concepts that will be used to evolve the openModeller software package for species modelling. The basic steps performed while developing a model are described, highlighting important aspects based on the knowledge of modelling experts. To illustrate the steps defined for the process, an experiment was developed modelling the distribution of Ouratea spectabilis (Mart.) Engl. (Ochnaceae) using openModeller. As a consequence of the knowledge gained in this work, many desirable improvements to modelling software packages were identified and are presented, and the potential for large-scale experimentation in ecological niche modelling is discussed, highlighting opportunities for research. The results are important for those involved in the development of modelling tools and systems, both for requirements analysis and for insight into new features and trends for this category of systems. They can also be very helpful to beginners in modelling research, who can use the process and the example experiment as a guide to this complex activity.
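To make the core modelling step concrete, here is a minimal sketch (toy data, not the paper's experiment) of a BIOCLIM-style climate envelope, one of the classic algorithms available in packages such as openModeller: the envelope is fitted from the environmental values at occurrence points and then projected over the raster stack.

```python
import numpy as np

def bioclim_envelope(occurrence_values, pct=(5, 95)):
    """Fit a simple BIOCLIM-style envelope: for each environmental layer,
    keep the central percentile range of values at occurrence points."""
    lo = np.percentile(occurrence_values, pct[0], axis=0)
    hi = np.percentile(occurrence_values, pct[1], axis=0)
    return lo, hi

def predict(env_stack, lo, hi):
    """Suitability = fraction of layers whose value falls in the envelope.
    env_stack shape: (n_layers, rows, cols)."""
    inside = (env_stack >= lo[:, None, None]) & (env_stack <= hi[:, None, None])
    return inside.mean(axis=0)

# Toy data: 3 environmental layers on a 50x50 grid, 40 occurrence points.
rng = np.random.default_rng(1)
env = rng.normal(size=(3, 50, 50))
pts = rng.integers(0, 50, size=(40, 2))
occ = env[:, pts[:, 0], pts[:, 1]].T        # (40, 3) values at the points
lo, hi = bioclim_envelope(occ)
suitability = predict(env, lo, hi)
print(suitability.shape, suitability.max())
```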

Relevance:

30.00%

Publisher:

Abstract:

We examine the representation of judgements of stochastic independence in probabilistic logics. We focus on a relational logic where (i) judgements of stochastic independence are encoded by directed acyclic graphs, and (ii) probabilistic assessments are flexible in the sense that they are not required to specify a single probability measure. We discuss issues of knowledge representation and inference that arise from our particular combination of graphs, stochastic independence, logical formulas and probabilistic assessments.
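A tiny sketch (my example, not from the paper) of what "flexible" assessments over a DAG look like: a two-node graph A -> B with interval-valued probabilities rather than a single measure, and inference computing bounds on P(B). Because P(B) is multilinear in the parameters, the extrema are attained at the interval endpoints.

```python
from itertools import product

# DAG A -> B with interval ("credal") assessments:
# P(A) in [0.2, 0.4], P(B|A) in [0.7, 0.9], P(B|~A) in [0.1, 0.3].
PA = (0.2, 0.4)
PB_given_A = (0.7, 0.9)
PB_given_notA = (0.1, 0.3)

# Enumerate the 8 extreme points; P(B) = P(A)P(B|A) + P(~A)P(B|~A).
values = [pa * pba + (1 - pa) * pbna
          for pa, pba, pbna in product(PA, PB_given_A, PB_given_notA)]
print(f"P(B) in [{min(values):.3f}, {max(values):.3f}]")
```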

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates probabilistic logics endowed with independence relations. We review propositional probabilistic languages without and with independence. We then consider graph-theoretic representations for propositional probabilistic logic with independence; complexity is analyzed, algorithms are derived, and examples are discussed. Finally, we examine a restricted first-order probabilistic logic that generalizes relational Bayesian networks.
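For the propositional case without independence, the basic inference task is probabilistic satisfiability (coherence), which reduces to linear programming over the truth assignments. A minimal sketch with made-up assessments: here P(a & b) = 0.2 violates the Fréchet bound P(a) + P(b) - 1 = 0.3, so the program is infeasible.

```python
import numpy as np
from scipy.optimize import linprog

# Does some probability measure over the 4 truth assignments of (a, b)
# satisfy P(a) = 0.6, P(b) = 0.7, P(a & b) = 0.2?  LP feasibility check.
worlds = [(0, 0), (0, 1), (1, 0), (1, 1)]        # (a, b)
A_eq = np.array([
    [a for a, b in worlds],                       # P(a)
    [b for a, b in worlds],                       # P(b)
    [a * b for a, b in worlds],                   # P(a & b)
    [1, 1, 1, 1],                                 # probabilities sum to 1
])
b_eq = [0.6, 0.7, 0.2, 1.0]
res = linprog(c=np.zeros(4), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
print("coherent" if res.success else "incoherent")   # -> incoherent
```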

Relevance:

30.00%

Publisher:

Abstract:

Valuation of projects for the preservation of water resources provides important information to policy makers and funding institutions. Standard contingent valuation models rely on distributional assumptions to provide welfare measures. Deviations between the assumed and the actual distribution of benefits matter when designing policies in developing countries, where inequality is a concern. This article applies semiparametric methods to estimate the benefit from a project for the preservation of an important Brazilian river basin. These estimates differ significantly from those obtained using the standard parametric approach.
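One standard distribution-free tool in this literature (not necessarily the authors' estimator) is the Turnbull lower bound on mean willingness to pay from single-bounded dichotomous-choice data; a sketch with simulated responses:

```python
import numpy as np

def turnbull_lower_bound(bids, yes):
    """Turnbull lower-bound estimate of mean WTP: the empirical CDF at the
    bid levels is monotonized, and the mass in each interval [t_j, t_{j+1})
    is valued at its lower endpoint t_j."""
    bids, yes = np.asarray(bids, float), np.asarray(yes, float)
    t = np.unique(bids)
    F = np.array([1.0 - yes[bids == b].mean() for b in t])  # P(WTP < t_j)
    F = np.maximum.accumulate(F)       # enforce monotonicity (pooling)
    F_next = np.concatenate((F[1:], [1.0]))
    return float(np.sum(t * (F_next - F)))

rng = np.random.default_rng(2)
true_wtp = rng.lognormal(3.0, 0.5, 1000)        # hypothetical WTP draws
bid = rng.choice([5, 10, 20, 40, 80], 1000)     # assigned bid levels
answer = (true_wtp >= bid).astype(int)          # 1 = accepted the bid
print("lower-bound mean WTP:", turnbull_lower_bound(bid, answer))
```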

Relevance:

30.00%

Publisher:

Abstract:

Phylogenetic analyses of chloroplast DNA sequences, morphology, and combined data have provided consistent support for many of the major branches within the angiosperm clade Dipsacales. Here we use sequences from three mitochondrial loci to test the existing broad-scale phylogeny and to attempt to resolve several relationships that have remained uncertain. Parsimony, maximum likelihood, and Bayesian analyses of a combined mitochondrial data set recover trees broadly consistent with previous studies, although resolution and support are lower than in the largest chloroplast analyses. Combining the chloroplast and mitochondrial data results in a generally well-resolved and very strongly supported topology, but the previously recognized problem areas remain. To investigate why these relationships have been difficult to resolve, we conducted a series of experiments using different data partitions and heterogeneous substitution models. More complex modeling schemes are usually favored regardless of the partitions recognized, but model choice had little effect on topology or support values. In contrast, there are consistent but weakly supported differences between the topologies recovered from the coding and non-coding matrices. These conflicts correspond directly to the relationships that were poorly resolved in analyses of the full combined chloroplast-mitochondrial data set. We suggest that incongruent signal has contributed to our inability to resolve these problem areas confidently.
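A hedged illustration of the model-choice step reported above: comparing substitution models of increasing complexity by AIC. The model names are real, but the log-likelihood values are invented for the example.

```python
# (maximized log-likelihood, free model parameters) -- illustrative numbers
models = {
    "JC69":  (-12510.4, 1),
    "HKY85": (-12301.7, 5),
    "GTR+G": (-12255.2, 10),
}
for name, (lnL, k) in models.items():
    aic = 2 * k - 2 * lnL          # AIC = 2k - 2 ln L; lower is better
    print(f"{name:6s} AIC = {aic:.1f}")
# With likelihood gains this large the richer model wins, echoing the
# observation that more complex schemes tend to be favored.
```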

Relevance:

30.00%

Publisher:

Abstract:

A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be reduced significantly by representing each kernel with an appropriate basis of orthonormal functions. Such a representation yields the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic part generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically using a back-propagation-through-time technique. The expressions for the Kautz basis and for generalized orthonormal bases of functions (GOBF) are derived; those for the Laguerre basis follow straightforwardly as a particular case. The main innovation is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis, and such directions can in turn be used in an optimization procedure to locate the minimum of a cost function based on the estimation error of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
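As a rough sketch of the setting (not the paper's method): a Laguerre filter bank whose single real pole and linear output coefficients are estimated jointly from input-output data. For brevity this uses a first-order (linear) kernel only and SciPy's finite-difference Jacobians, rather than the exact back-propagation-through-time gradients derived in the paper.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import least_squares

def laguerre_outputs(u, pole, n_filters):
    """Outputs of a discrete Laguerre filter bank with one real pole:
    a first-order section followed by an all-pass cascade."""
    g = np.sqrt(1.0 - pole**2)
    x = lfilter([g], [1.0, -pole], u)
    outs = [x]
    for _ in range(n_filters - 1):
        x = lfilter([-pole, 1.0], [1.0, -pole], x)   # all-pass section
        outs.append(x)
    return np.stack(outs)                             # (n_filters, N)

def residuals(theta, u, y, n_filters):
    pole, coef = theta[0], theta[1:]
    return coef @ laguerre_outputs(u, pole, n_filters) - y

# Hypothetical system and data; pole and coefficients estimated jointly
# from input-output measurements alone.
rng = np.random.default_rng(3)
u = rng.normal(size=2000)
y = lfilter([0.2], [1.0, -0.8], u) + 0.01 * rng.normal(size=2000)

n_filters = 4
theta0 = np.concatenate(([0.5], np.zeros(n_filters)))
fit = least_squares(residuals, theta0, args=(u, y, n_filters),
                    bounds=([-0.99] + [-np.inf] * n_filters,
                            [0.99] + [np.inf] * n_filters))
print("estimated pole:", fit.x[0])   # should approach the true pole 0.8
```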

Relevance:

30.00%

Publisher:

Abstract:

Increasing efforts are being made to integrate different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective black-box decomposition strategies for one-dimensional networks are needed in order to: (i) employ domain decomposition for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among other possibilities). The strategy proposed in this article works in both scenarios, although the applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects exactly two of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with an arbitrary and independent choice of boundary conditions for each sub-network. The idea is then to solve this non-linear system to convergence, which guarantees strong coupling of the complete network: if the non-linear solver converges at each time step, the solution coincides with the one obtained by modeling the whole network monolithically. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems ranging from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
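A toy version of the interface problem (my sketch, with M = 1 and linear resistive "sub-networks" in place of 1D haemodynamic models): two unknowns (flow rate Q, pressure P), one residual equation contributed by each sub-network, solved with a matrix-free Broyden iteration that sees the sub-networks only as black boxes.

```python
import numpy as np
from scipy.optimize import broyden1

R1, R2 = 2.0, 3.0           # hypothetical sub-network "resistances"
P_in, P_out = 100.0, 10.0   # boundary pressures

def upstream_residual(Q, P):     # sub-network 1's equation at the interface
    return Q - (P_in - P) / R1

def downstream_residual(Q, P):   # sub-network 2's equation at the interface
    return P - (P_out + R2 * Q)

def F(x):
    Q, P = x
    return np.array([upstream_residual(Q, P), downstream_residual(Q, P)])

Q, P = broyden1(F, [1.0, 50.0], f_tol=1e-12)
print(f"Q = {Q:.4f}, P = {P:.4f}")
# Monolithic check: Q = (P_in - P_out)/(R1 + R2) = 18, P = P_in - R1*Q = 64,
# i.e. the converged partitioned solution matches the monolithic one.
```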

Relevance:

30.00%

Publisher:

Abstract:

We investigate the critical behaviour of a probabilistic mixture of cellular automata (CA) rules 182 and 200 (in Wolfram's enumeration scheme) by mean-field analysis and Monte Carlo simulations. We find that as one CA is switched off and the other switched on by varying the single parameter of the model, the probabilistic CA (PCA) goes through an extinction-survival-type phase transition, and the numerical data indicate that it belongs to the directed percolation universality class of critical behaviour. The PCA displays a characteristic stationary density profile and slow, diffusive dynamics close to the pure CA 200 point, which we discuss briefly. Remarks on an interesting related stochastic lattice gas are given in the conclusions.
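A minimal Monte Carlo sketch of such a mixture (my illustration; lattice size, mixing parameter and run length are arbitrary): each site independently updates by rule 182 with probability p and by rule 200 otherwise, and the stationary density is measured.

```python
import numpy as np

def step(state, p, rng):
    """One synchronous update of the probabilistic mixture of elementary
    CA rules 182 and 200 (periodic boundary conditions)."""
    l, c, r = np.roll(state, 1), state, np.roll(state, -1)
    idx = 4 * l + 2 * c + r                        # neighbourhood as 0..7
    rule182 = np.array([0, 1, 1, 0, 1, 1, 0, 1])   # 182 = 10110110 binary
    rule200 = np.array([0, 0, 0, 1, 0, 0, 1, 1])   # 200 = 11001000 binary
    use182 = rng.random(state.size) < p
    return np.where(use182, rule182[idx], rule200[idx])

rng = np.random.default_rng(4)
state = rng.integers(0, 2, 1000)
p = 0.5                                            # mixing parameter
for _ in range(2000):
    state = step(state, p, rng)
print("stationary density ~", state.mean())
```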

Relevance:

30.00%

Publisher:

Abstract:

We show that the S parameter is not finite in theories of electroweak symmetry breaking in a slice of anti-de Sitter five-dimensional space, with the light fermions localized in the ultraviolet. We compute the one-loop contributions to S from the Higgs sector and show that they are logarithmically dependent on the cutoff of the theory. We discuss the renormalization of S, as well as the implications for bounds from electroweak precision measurements on these models. We argue that, although in principle the choice of renormalization condition could eliminate the S parameter constraint, a more consistent condition would still result in a large and positive S. On the other hand, we show that the dependence on the Higgs mass in S can be entirely eliminated by the renormalization procedure, making it impossible in these theories to extract a Higgs mass bound from electroweak precision constraints.
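For reference, the standard Peskin-Takeuchi definition of the S parameter in terms of derivatives of the new-physics gauge-boson self-energies at zero momentum (normalization conventions vary across the literature; this is a generic textbook form, not an equation quoted from the paper):

```latex
% \Pi_{33} and \Pi_{3Q} are the new-physics contributions to the
% self-energies of the corresponding electroweak currents.
S \;=\; 16\pi \left[\, \Pi'_{33}(0) - \Pi'_{3Q}(0) \,\right],
\qquad
\Pi'(0) \equiv \left.\frac{\mathrm{d}\Pi(q^2)}{\mathrm{d}q^2}\right|_{q^2 = 0}.
```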

Relevance:

30.00%

Publisher:

Abstract:

The coexistence of different types of templates has been the solution of choice to the information crisis of prebiotic evolution, triggered by the finding that a single RNA-like template cannot carry enough information to code for any useful replicase. In principle, confining d distinct templates of length L in a package or protocell, whose survival depends on the coexistence of the templates it holds, could resolve this crisis provided that d is made sufficiently large. Here we review the prototypical package model of Niesert et al. [1981. Origin of life between Scylla and Charybdis. J. Mol. Evol. 17, 348-353], which guarantees the greatest possible region of viability of the protocell population, and show that this model, and hence the entire package approach, does not resolve the information crisis. In particular, we show that the total information stored in a viable protocell (Ld) tends to a constant value that depends only on the spontaneous error rate per nucleotide of the template replication mechanism. As a result, an increase in d must be followed by a decrease in L, so that the net information gain is null.
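A back-of-the-envelope version of this scaling (my sketch, assuming protocell viability requires essentially error-free copies of all its templates, with per-nucleotide error rate u):

```latex
% Probability of an exact copy of one template of length L, and of all d:
P_{\mathrm{one}} = (1-u)^{L} \approx e^{-uL},
\qquad
P_{\mathrm{all}} \approx e^{-uLd}.
% Requiring P_{all} to stay above a fixed viability level forces
% uLd \le \mathrm{const}, i.e. the total information Ld \lesssim \mathrm{const}/u,
% so increasing d must be paid for by decreasing L.
```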

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study was to comparatively assess dental arch width, in the canine and molar regions, by means of direct measurements on plaster models, photocopies and digitized images of the models. The sample consisted of 130 pairs of plaster models, photocopies and digitized images of the models of white patients (n = 65), of both genders, with Class I and Class II Division 1 malocclusions, treated by standard Edgewise mechanics and extraction of the four first premolars. Maxillary and mandibular intercanine and intermolar widths were measured by a calibrated examiner, before and after orthodontic treatment, using the three modes of reproduction of the dental arches. Dispersion of the pre- and post-treatment intra-arch linear measurements (mm) was represented as box plots. The three measuring methods were compared by one-way repeated-measures ANOVA (α = 0.05). Initial/final mean values varied as follows: 33.94 to 34.29 mm / 34.49 to 34.66 mm (maxillary intercanine width); 26.23 to 26.26 mm / 26.77 to 26.84 mm (mandibular intercanine width); 49.55 to 49.66 mm / 47.28 to 47.45 mm (maxillary intermolar width); and 43.28 to 43.41 mm / 40.29 to 40.46 mm (mandibular intermolar width). There were no statistically significant differences between the mean dental arch widths estimated by the three methods, either before or after orthodontic treatment. It may be concluded that photocopies and digitized images of the plaster models provide reliable reproductions of the dental arches for obtaining transverse intra-arch measurements.
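A hedged sketch of this comparison step, using a repeated-measures ANOVA in which the same arch width is measured on the same case by three methods. The data are simulated with no method effect, mirroring the reported finding of no significant difference; none of the numbers are from the study.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(5)
n_cases = 65
true_width = rng.normal(34.0, 1.5, n_cases)   # per-case arch width, mm
rows = []
for method in ("plaster", "photocopy", "digitized"):
    for case, w in enumerate(true_width):
        rows.append({"case": case, "method": method,
                     "width": w + rng.normal(0, 0.2)})  # measurement noise
df = pd.DataFrame(rows)
# Within-subject factor: measuring method; subject: the case.
print(AnovaRM(df, depvar="width", subject="case", within=["method"]).fit())
```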

Relevance:

20.00%

Publisher:

Abstract:

Dental impression is an important step in the preparation of prostheses, since it provides the reproduction of anatomic and surface details of teeth and adjacent structures. The objective of this study was to evaluate the linear dimensional alterations in gypsum dies obtained with different elastomeric materials, using a resin coping impression technique with individual shells. A master cast made of stainless steel, with fixed prosthesis characteristics and two prepared abutment teeth, was used to obtain the impressions. Reference points (A, B, C, D, E, F, G and H) were recorded on the occlusal and buccal surfaces of the abutments to register the distances. The impressions were obtained using the following materials: polyether, mercaptan-polysulfide, addition silicone, and condensation silicone. The transfer impressions were made with custom trays and an irreversible hydrocolloid material, and were poured with type IV gypsum. The distances between the identified points on the gypsum dies were measured using an optical microscope, and the results were statistically analyzed by ANOVA (p < 0.05) and Tukey's test. The mean distances were as follows: addition silicone (AB = 13.6 µm, CD = 15.0 µm, EF = 14.6 µm, GH = 15.2 µm), mercaptan-polysulfide (AB = 36.0 µm, CD = 36.0 µm, EF = 39.6 µm, GH = 40.6 µm), polyether (AB = 35.2 µm, CD = 35.6 µm, EF = 39.4 µm, GH = 41.4 µm) and condensation silicone (AB = 69.2 µm, CD = 71.0 µm, EF = 80.6 µm, GH = 81.2 µm). All measurements on the gypsum dies were compared to those of the master cast. The results demonstrated that addition silicone provided the best dimensional stability of the materials tested, followed by polyether, polysulfide and condensation silicone. No statistical difference was found between the polyether and mercaptan-polysulfide materials.
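A hedged sketch of the statistical analysis: one-way ANOVA followed by Tukey's test across the four materials. The simulated group means roughly follow the reported ordering (addition silicone most stable, condensation silicone least); sample sizes and spreads are invented.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(6)
groups = {
    "addition":     rng.normal(14.6, 2.0, 10),   # discrepancy, micrometres
    "polyether":    rng.normal(37.9, 2.0, 10),
    "polysulfide":  rng.normal(38.1, 2.0, 10),
    "condensation": rng.normal(75.5, 2.0, 10),
}
print(f_oneway(*groups.values()))      # global test across materials
print(tukey_hsd(*groups.values()))     # pairwise comparisons, in dict order
# Expect: all pairs significant except polyether vs polysulfide, as reported.
```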

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this study was to develop and validate equations to estimate the aboveground phytomass of a 30-year-old plot of Atlantic Forest. In two plots of 100 m², a total of 82 trees were cut down at ground level. For each tree, height and diameter were measured. Leaves and woody material were separated in order to determine their fresh weights in field conditions. Samples of each fraction were oven-dried at 80 °C to constant weight to determine their dry weight. The tree data were divided into two random samples: one was used to develop the regression equations, and the other for validation. The models were developed using simple linear regression analysis, where the dependent variable was the dry mass and the independent variables were height (h), diameter (d) and d²h. Validation was carried out using Pearson's correlation coefficient, the paired Student's t-test and the standard error of the estimate. The best equations for estimating aboveground phytomass were: ln DW = -3.068 + 2.522 ln d (r² = 0.91; s_y/x = 0.67) and ln DW = -3.676 + 0.951 ln(d²h) (r² = 0.94; s_y/x = 0.56).
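A worked example of applying the two fitted allometric equations to a hypothetical tree with d = 10 and h = 12 (the abstract does not state the measurement units, so the inputs and outputs below are illustrative only):

```python
import math

d, h = 10.0, 12.0
dw_d   = math.exp(-3.068 + 2.522 * math.log(d))          # diameter-only model
dw_d2h = math.exp(-3.676 + 0.951 * math.log(d**2 * h))   # d^2*h model
print(f"DW (d model)     ~ {dw_d:.1f}")    # ~15.5
print(f"DW (d^2 h model) ~ {dw_d2h:.1f}")  # ~21.5
```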