876 results for Probabilistic Finite Automata
Abstract:
Monte Carlo simulations of a model for a gamma-Fe2O3 (maghemite) single particle of spherical shape are presented, aiming to elucidate the specific roles played by the finite size and the surface in the anomalous magnetic behavior observed in small-particle systems at low temperature. The influence of finite-size effects on the equilibrium properties of extensive magnitudes, field coolings, and hysteresis loops is studied and compared to the results for periodic boundaries. It is shown that for the smallest sizes the thermal demagnetization of the surface completely dominates the magnetization, while the behavior of the core is similar to that of the periodic boundary case, independently of the particle diameter D. The change in shape of the hysteresis loops with D demonstrates that the reversal mode is strongly influenced by the presence of broken links and disorder at the surface.
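A minimal sketch of the kind of simulation described, assuming a plain Ising cluster cut into a sphere with free boundaries (the paper's actual model is a maghemite-specific Hamiltonian with ferrimagnetic sublattices; all parameters here are illustrative):

```python
# Illustrative sketch, NOT the paper's maghemite Hamiltonian: Metropolis Monte
# Carlo on an Ising cluster cut from a simple cubic lattice into a sphere of
# diameter D, with free boundaries, tracking surface vs. core magnetization.
import numpy as np

rng = np.random.default_rng(0)

def build_sphere(D):
    """Return lattice-site indices inside a sphere of diameter D."""
    r = D / 2.0
    g = np.arange(-int(r), int(r) + 1)
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    mask = X**2 + Y**2 + Z**2 <= r**2
    return np.argwhere(mask), mask

def neighbor_lists(coords, mask):
    """6-neighbor lists; sites outside the sphere are absent (free boundary)."""
    index = -np.ones(mask.shape, dtype=int)
    for i, c in enumerate(coords):
        index[tuple(c)] = i
    steps = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
    nbrs = []
    for c in coords:
        lst = []
        for d in steps:
            p = c + d
            if (p >= 0).all() and (p < mask.shape).all() and index[tuple(p)] >= 0:
                lst.append(index[tuple(p)])
        nbrs.append(lst)
    return nbrs

def metropolis(spins, nbrs, T, J=1.0, sweeps=200):
    """Single-spin-flip Metropolis at temperature T (units of J/k_B)."""
    N = len(spins)
    for _ in range(sweeps):
        for i in rng.integers(0, N, N):
            dE = 2.0 * J * spins[i] * sum(spins[j] for j in nbrs[i])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i] *= -1
    return spins

coords, mask = build_sphere(D=8)
nbrs = neighbor_lists(coords, mask)
spins = np.ones(len(coords))
surface = np.array([len(lst) < 6 for lst in nbrs])   # broken-bond sites
for T in (0.5, 2.0, 4.0):
    metropolis(spins, nbrs, T)
    print(T, abs(spins[~surface].mean()), abs(spins[surface].mean()))
```

The point of the sketch is the bookkeeping: surface sites are identified by their reduced coordination (broken links), so core and surface magnetization can be tracked separately, as in the abstract.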
Abstract:
The liquid-liquid critical point scenario of water hypothesizes the existence of two metastable liquid phases, low-density liquid (LDL) and high-density liquid (HDL), deep within the supercooled region. The hypothesis originates from computer simulations of the ST2 water model, but the stability of the LDL phase with respect to the crystal is still being debated. We simulate supercooled ST2 water at constant pressure, constant temperature, and constant number of molecules N for N ≤ 729 and times up to 1 μs. We observe clear differences between the two liquids, both structural and dynamical. Using several methods, including finite-size scaling, we confirm the presence of a liquid-liquid phase transition ending in a critical point. We find that the LDL is stable with respect to the crystal in 98% of our runs (we perform 372 runs for LDL or LDL-like states), and in 100% of our runs for the two largest system sizes (N = 512 and 729, for which we perform 136 runs for LDL or LDL-like states). In all these runs, tiny crystallites grow and then melt within 1 μs. Only for N ≤ 343 do we observe six events (over 236 runs for LDL or LDL-like states) of spontaneous crystallization after crystallites reach an estimated critical size of about 70 ± 10 molecules.
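As a hedged illustration of one way such runs can be analysed (not the authors' code; the density samples below are synthetic stand-ins for ST2 trajectories), near liquid-liquid coexistence the sampled density distribution at constant pressure becomes bimodal:

```python
# Illustrative only: histogram the density sampled along constant-pressure
# runs; two peaks indicate coexisting LDL-like and HDL-like basins.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical density samples (g/cm^3): a low-density and a high-density basin.
rho = np.concatenate([rng.normal(0.88, 0.010, 4000),   # LDL-like
                      rng.normal(1.02, 0.015, 4000)])  # HDL-like
hist, edges = np.histogram(rho, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# Count significant local maxima as a crude bimodality check.
peaks = [c for i, c in enumerate(centers[1:-1], 1)
         if hist[i] > hist[i-1] and hist[i] > hist[i+1]
         and hist[i] > 0.1 * hist.max()]
print("density peaks near:", np.round(peaks, 3))
```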
Abstract:
We argue that low-temperature effects in QED can, if anywhere, be quantitatively interesting only for bound electrons. Unfortunately, the dominant thermal contribution turns out to be level independent, so it does not affect the frequency of the transition radiation.
Abstract:
We revisit the analytical properties of the static quasi-photon polarizability function for an electron gas at finite temperature, in connection with the existence of Friedel oscillations in the potential created by an impurity. In contrast with the zero-temperature case, where the polarizability is an analytical function except for the two branch cuts responsible for Friedel oscillations, at finite temperature the corresponding function is non-analytical, in spite of becoming continuous everywhere on the complex plane. As a result, the oscillatory behavior of the potential survives. We calculate the potential at large distances and relate the calculation to the non-analytical properties of the polarizability.
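For orientation, the zero-temperature benchmark the abstract contrasts against is the textbook Friedel result for a three-dimensional electron gas (a standard formula, not the paper's finite-temperature calculation):

```latex
% Asymptotic impurity potential produced by the 2k_F branch-cut singularity
% of the zero-temperature Lindhard polarizability (3D electron gas):
\delta V(r) \;\propto\; \frac{\cos(2 k_F r)}{r^{3}}, \qquad r \to \infty.
```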
Abstract:
This paper derives the HJB (Hamilton-Jacobi-Bellman) equation for sophisticated agents in a finite-horizon dynamic optimization problem with non-constant discounting in a continuous setting, using a dynamic programming approach. A simple example illustrates the applicability of this HJB equation by suggesting a method for constructing the subgame-perfect equilibrium solution to the problem. Conditions for observational equivalence with an associated problem with constant discounting are analyzed. Special attention is paid to the case of free terminal time. Strotz's model (an eating-cake problem of a nonrenewable resource with non-constant discounting) is revisited.
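For reference, the constant-discounting benchmark that the paper generalizes is the standard finite-horizon HJB equation (a textbook result; the sophisticated agent's equation with a non-constant discount function acquires an additional non-local term, which the paper derives):

```latex
% Current-value HJB with dynamics \dot{x} = g(x,u,t), payoff F(x,u,t),
% constant discount rate \rho, and terminal value S(x(T)):
\rho V(x,t) - \frac{\partial V}{\partial t}
  = \max_{u}\left\{ F(x,u,t) + \frac{\partial V}{\partial x}\, g(x,u,t) \right\},
\qquad V(x,T) = S(x).
```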
Abstract:
Aim Conservation strategies are in need of predictions that capture spatial community composition and structure. Currently, the methods used to generate these predictions generally focus on deterministic processes and omit important stochastic processes and other unexplained variation in model outputs. Here we test a novel approach to community models that accounts for this variation, and we determine how well it reproduces observed properties of alpine butterfly communities. Location The western Swiss Alps. Methods We propose a new approach to processing probabilistic predictions derived from stacked species distribution models (S-SDMs) in order to predict and assess the uncertainty in predictions of community properties. We test the utility of our novel approach against a traditional threshold-based approach. We used mountain butterfly communities spanning a large elevation gradient as a case study and evaluated the ability of our approach to model the species richness and phylogenetic diversity of communities. Results S-SDMs reproduced the observed decrease in phylogenetic diversity and species richness with elevation, both syndromes of environmental filtering. The prediction accuracy of community properties varies along the environmental gradient: variability in predictions of species richness was higher at low elevation, while variability in predictions of phylogenetic diversity was lower there. Our approach allowed mapping the variability in species richness and phylogenetic diversity projections. Main conclusion Using our probabilistic approach to process species distribution model outputs to reconstruct communities furnishes an improved picture of the range of possible assemblage realisations under similar environmental conditions given stochastic processes, and helps inform managers about the uncertainty in the modelling results.
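A minimal sketch of the probabilistic stacking idea (an assumed implementation, not the authors' code): rather than thresholding each SDM, draw Bernoulli assemblage realisations from the predicted occurrence probabilities and summarise richness with its uncertainty per site (phylogenetic diversity would additionally require a tree):

```python
# Sketch: probabilistic stacking of species distribution model outputs.
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_species, n_draws = 5, 20, 1000
# Hypothetical per-site, per-species occurrence probabilities from the S-SDM.
p = rng.uniform(0.0, 0.9, size=(n_sites, n_species))

# Each draw is one possible assemblage realisation under the fitted models.
draws = rng.random((n_draws, n_sites, n_species)) < p   # (draws, sites, species)
richness = draws.sum(axis=2)                            # richness per realisation

print("expected richness per site:", richness.mean(axis=0).round(2))
print("95% interval per site:",
      np.percentile(richness, [2.5, 97.5], axis=0).round(1))
```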
Abstract:
The Multiscale Finite Volume (MsFV) method has been developed to efficiently solve reservoir-scale problems while conserving fine-scale details. The method employs two grid levels: a fine grid and a coarse grid. The latter is used to calculate a coarse solution to the original problem, which is interpolated to the fine mesh. The coarse system is constructed from the fine-scale problem using restriction and prolongation operators that are obtained by introducing appropriate localization assumptions. Through a successive reconstruction step, the MsFV method is able to provide an approximate, but fully conservative, fine-scale velocity field. For very large problems (e.g., a one-billion-cell model), a two-level algorithm can remain computationally expensive. Depending on the upscaling factor, the computational expense comes either from the costs associated with the solution of the coarse problem or from the construction of the local interpolators (basis functions). To ensure numerical efficiency in the former case, the MsFV concept can be reapplied to the coarse problem, leading to a new, coarser level of discretization. One challenge in the use of a multilevel MsFV technique is to find an efficient reconstruction step to obtain a conservative fine-scale velocity field. In this work, we introduce a three-level Multiscale Finite Volume method (MlMsFV) and give a detailed description of the reconstruction step. Complexity analyses of the original MsFV method and the new MlMsFV method are discussed, and their performances in terms of accuracy and efficiency are compared.
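An algebraic skeleton of the two-grid idea, assuming a piecewise-constant prolongation for brevity (the real MsFV method builds its prolongation from locally computed basis functions and adds the conservative flux-reconstruction step described above):

```python
# Schematic two-grid solve: restrict the fine system with R, solve the small
# coarse problem, and interpolate the result back with P.
import numpy as np

def coarse_solve(A, b, agg):
    """A: fine matrix (n x n); b: rhs; agg[i] = coarse cell of fine cell i."""
    n, nc = len(b), agg.max() + 1
    P = np.zeros((n, nc))
    P[np.arange(n), agg] = 1.0      # piecewise-constant prolongation (assumed)
    R = P.T                          # restriction; MsFV instead uses operators
                                     # built from local basis-function problems
    Ac = R @ A @ P                   # coarse-scale operator
    xc = np.linalg.solve(Ac, R @ b)  # small coarse system
    return P @ xc                    # interpolated fine-scale approximation

# Tiny 1D Poisson example: 8 fine cells aggregated into 2 coarse cells.
n = 8
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
agg = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(coarse_solve(A, b, agg))
```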
Abstract:
This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) the suspect is found as a result of a database search and (ii) the remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), the paper clarifies that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic, which has repeatedly demonstrated that the two requirements (i) and (ii) should not be a cause of concern.
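A hedged numerical illustration of the abstract's point (not taken from the paper):

```latex
% With \gamma the random-match probability of the crime-stain profile, the
% likelihood ratio for "the suspect is the source" (H_p) versus "some unknown
% person is the source" (H_d) is
LR = \frac{\Pr(\text{match} \mid H_p)}{\Pr(\text{match} \mid H_d)}
   = \frac{1}{\gamma},
% e.g. \gamma = 10^{-6} gives LR = 10^{6}. Finding the suspect through a
% database of size n, with the other n-1 members excluded, does not warrant
% dividing this LR by n; if anything, the exclusions slightly strengthen the
% evidence by removing alternative candidate sources.
```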
Abstract:
BACKGROUND: Articular surface reconstruction is essential in total shoulder arthroplasty. Because of the limited glenoid bone support, a thin glenoid component could improve anatomical reconstruction, but adverse mechanical effects might appear. METHODS: With a numerical musculoskeletal shoulder model, we analysed and compared three values of thickness of a typical all-polyethylene glenoid component: 2, 4 (reference) and 6 mm. A loaded movement of abduction in the scapular plane was simulated. We evaluated the humeral head translation, the muscle moment arms, the joint force, the articular contact pattern, and the polyethylene and cement stress. FINDINGS: Decreasing polyethylene thickness from 6 to 2 mm slightly increased humeral head translation and muscle moment arms. This induced a small decrease in the joint reaction force, but an important increase in stress within the polyethylene and the cement mantle. INTERPRETATION: The reference thickness of 4 mm seems a good compromise to avoid stress concentration and joint stuffing.
Abstract:
The highway departments of the states which use integral abutments in bridge design were contacted in order to study the extent of integral abutment use in skewed bridges and to survey the different guidelines used for analysis and design of integral abutments in skewed bridges. The variation in design assumptions and pile orientations among the various states in their approach to the use of integral abutments on skewed bridges is discussed. The problems associated with the treatment of the approach slab, backfill, and pile cap, and the reasons for using different pile orientations, are summarized in the report. An algorithm based on a state-of-the-art nonlinear finite element procedure previously developed by the authors was modified and used to study the influence of different factors on the behavior of piles in integral abutment bridges. An idealized integral abutment was introduced by assuming that the pile is rigidly cast into the pile cap and that the approach slab offers no resistance to lateral thermal expansion. Passive soil and shear resistance of the cap are neglected in design. A 40-foot H pile (HP 10 X 42) in six typical Iowa soils was analyzed for a fully restrained pile head and a pinned pile head. According to the numerical results, the maximum safe length for a fully restrained pile head is one-half the maximum safe length for a pinned pile head. If the pile head is partially restrained, the maximum safe length will lie between the two limits. The numerical results from an investigation of the effect of predrilled oversized holes indicate that if the length of the predrilled oversized hole is at least 4 feet below the ground, the vertical load-carrying capacity of the H pile is only reduced by 10 percent for 4 inches of lateral displacement in very stiff clay. With no predrilled oversized hole, the pile failed before the 4-inch lateral displacement was reached. Thus, the maximum safe lengths for integral abutment bridges may be increased by predrilling. Four different typical Iowa layered soils were selected and used in this investigation. In certain situations, compacted soil (blow count > 50 in standard penetration tests) is used as fill on top of natural soil. The numerical results showed that the critical conditions will depend on the length of the compacted soil. If the length of the compacted soil exceeds 4 feet, the failure mechanism for the pile is similar to one in a layer of very stiff clay. That is, the vertical load-carrying capacity of the H pile will be greatly reduced as the specified lateral displacement increases.
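A strongly simplified stand-in for the kind of analysis described (a linear beam-on-Winkler-foundation sketch; the authors' procedure is nonlinear and soil-specific, and all numbers below, including the section properties and subgrade reaction, are assumed for illustration only):

```python
# Sketch: laterally loaded pile as Euler-Bernoulli beam elements on lateral
# soil springs (linear Winkler idealization; illustrative values throughout).
import numpy as np

E, I = 29000.0, 210.0        # steel modulus (ksi) and moment of inertia (in^4), assumed
L_pile, n_el = 480.0, 40     # 40-foot pile in inches, 40 elements
k_soil = 0.5                 # subgrade reaction, kip/in per inch of pile (assumed)
Le = L_pile / n_el
ndof = 2 * (n_el + 1)        # per node: lateral deflection w, rotation theta
K = np.zeros((ndof, ndof))

# Standard beam bending element stiffness (w1, th1, w2, th2).
ke = (E * I / Le**3) * np.array([
    [ 12,     6*Le,   -12,     6*Le   ],
    [ 6*Le,   4*Le**2, -6*Le,  2*Le**2],
    [-12,    -6*Le,    12,    -6*Le   ],
    [ 6*Le,   2*Le**2, -6*Le,  4*Le**2]])

for e in range(n_el):
    d = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
    K[np.ix_(d, d)] += ke

# Lumped lateral soil springs at every node (tributary length Le).
for node in range(n_el + 1):
    K[2*node, 2*node] += k_soil * Le

F = np.zeros(ndof)
F[0] = 10.0                  # 10-kip lateral load at the pile head (assumed)
w = np.linalg.solve(K, F)
print("head deflection (in):", w[0])
```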
Abstract:
A new model for dealing with decision making under risk by considering subjective and objective information in the same formulation is presented here, together with the uncertain probabilistic weighted average (UPWA). Its main advantage is that it unifies the probability and the weighted average in the same formulation, while considering the degree of importance of each case in the analysis. Moreover, it is able to deal with uncertain environments represented in the form of interval numbers. We study some of its main properties and particular cases. The applicability of the UPWA is also studied; it is very broad because all previous studies that use the probability or the weighted average can be revisited with this new approach. Focus is placed on a multi-person decision making problem regarding the selection of strategies by using the theory of expertons.
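A sketch of the unification idea as we read it from the abstract (the exact operator definition is in the paper; the convex combination below, and all numbers, are assumptions for illustration):

```python
# Sketch: aggregate interval-valued arguments with a convex combination of
# objective probabilities and subjective importance weights.
import numpy as np

def upwa(intervals, p, w, beta):
    """intervals: (n, 2) array of [lo, hi]; p, w: probabilities and weights,
    each summing to 1; beta: degree of importance of the probabilities."""
    v = beta * np.asarray(p) + (1 - beta) * np.asarray(w)   # unified weights
    a = np.asarray(intervals, dtype=float)
    return v @ a   # weighted sum of intervals -> interval [lo, hi]

# Hypothetical payoffs of one strategy under four states of nature.
intervals = [[40, 60], [50, 70], [20, 40], [70, 90]]
p = [0.3, 0.3, 0.2, 0.2]      # objective probabilities
w = [0.1, 0.2, 0.3, 0.4]      # subjective importance weights
print(upwa(intervals, p, w, beta=0.6))
```

Setting beta = 1 recovers a purely probabilistic aggregation and beta = 0 a pure weighted average, which is the sense in which the two are unified in one formulation.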
Abstract:
The multiscale finite-volume (MSFV) method has been derived to efficiently solve large problems with spatially varying coefficients. The fine-scale problem is subdivided into local problems that can be solved separately and are coupled by a global problem. Consequently, this algorithm shares some characteristics with two-level domain decomposition (DD) methods. However, the MSFV algorithm is different in that it incorporates a flux-reconstruction step, which delivers a fine-scale mass-conservative flux field without the need for iterating. This is achieved by the use of two overlapping coarse grids. The recently introduced correction function allows for a consistent handling of source terms, which makes the MSFV method a flexible algorithm that is applicable to a wide spectrum of problems. It is demonstrated that the MSFV operator, used to compute an approximate pressure solution, can be equivalently constructed by writing the Schur complement with a tangential approximation of a single-cell overlapping grid and incorporating appropriate coarse-scale mass-balance equations.
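For reference, the Schur complement referred to above is standard linear algebra: splitting the fine-scale unknowns into interior (I) and interface (E) blocks,

```latex
\begin{pmatrix} A_{II} & A_{IE} \\ A_{EI} & A_{EE} \end{pmatrix}
\begin{pmatrix} x_I \\ x_E \end{pmatrix}
=
\begin{pmatrix} b_I \\ b_E \end{pmatrix},
\qquad
S = A_{EE} - A_{EI} A_{II}^{-1} A_{IE},
```

and the reduced interface problem is S x_E = b_E - A_{EI} A_{II}^{-1} b_I; the paper's claim is that the MSFV coarse operator equals this S under the tangential (localization) approximation together with coarse-scale mass balance.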
Abstract:
We describe the version of the GPT planner to be used in the planning competition. This version, called mGPT, solves MDPs specified in the PPDDL language by extracting and using different classes of lower bounds, along with various heuristic-search algorithms. The lower bounds are extracted from deterministic relaxations of the MDP, where alternative probabilistic effects of an action are mapped into different, independent, deterministic actions. The heuristic-search algorithms, on the other hand, use these lower bounds for focusing the updates and delivering a consistent value function over all states reachable from the initial state with the greedy policy.
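An illustrative sketch of the all-outcomes determinization the abstract describes (not the mGPT source code): every probabilistic effect becomes its own deterministic action, and the cost-to-goal of the relaxed problem, computed here with Dijkstra, lower-bounds the expected cost in the MDP, making it an admissible heuristic:

```python
# Sketch: all-outcomes determinization + Dijkstra lower bound for an MDP.
import heapq

def determinize(mdp_actions):
    """mdp_actions: {name: (cost, [(prob, successor_fn), ...])} ->
    deterministic actions, one per probabilistic outcome."""
    det = []
    for name, (cost, outcomes) in mdp_actions.items():
        for k, (_, succ) in enumerate(outcomes):
            det.append((f"{name}#{k}", cost, succ))
    return det

def lower_bound(start, goal, det_actions):
    """Dijkstra over the determinized problem: an admissible heuristic."""
    dist, pq = {start: 0.0}, [(0.0, start)]
    while pq:
        d, s = heapq.heappop(pq)
        if s == goal:
            return d
        if d > dist.get(s, float("inf")):
            continue
        for _, cost, succ in det_actions:
            t = succ(s)
            if t is not None and d + cost < dist.get(t, float("inf")):
                dist[t] = d + cost
                heapq.heappush(pq, (d + cost, t))
    return float("inf")

# Toy chain: "move" reaches s+1 with prob 0.8, stays put with prob 0.2.
actions = {"move": (1.0, [(0.8, lambda s: s + 1), (0.2, lambda s: s)])}
print(lower_bound(0, 3, determinize(actions)))  # 3.0 <= true expected cost 3.75
```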